There are no computers, anywhere
When we interact with these technology-filled boxes, I see us using the box as a proxy for the host of people who brought it into being. Through this proxy we attempt to understand & respond to the designers as they ask us to choose which button to click, which document to view, who to email. These interactions form mini-conversations between ourselves and the designers, enacted through the proxy, that frustrate us whenever we fail to achieve our goals.
Why can’t we talk this through?
As Suchman describes, our face-to-face conversations are not simple requests & responses. Using ethnomethodology with conversation analysis, Suchman highlights conversation as a highly complex, interactive set of competencies that continually evolves within the dynamic situations it is placed in: filled with arm waving, facial expressions, silences, ums, ahs and errs that all carry meaning.
Even face-to-face we often misunderstand one another. So when you place the designer and ‘Us’ half a world away, likely years apart, with no means to interact apart from a proxy that can only converse through text, point & click and simple sensors, then perhaps we should not be surprised when these conversations fail.
Why don’t they understand Us?
We are all unique, living in different places, with a myriad of backgrounds, beliefs, cultures and intentions. These coalesce to transform these identical technology-filled boxes into unique, bounded, cultural objects. The designers cannot design for 7 billion unique individuals, so they have to aggregate Us down to a manageable set, collapsing away the Us in Us.
Suchman would frame their design tasks as a set of Situated Actions, immersed in the world the designers inhabit at that time. The designers are not vanilla humans either: they have their own backgrounds, beliefs, cultures & intentions, so their Situated Actions become markedly distinct from our Situated Actions.
Can’t they just tell us what to do?
Designers often create a plan to hand-hold us through the box’s interface. The box is suited to this, as these plans are abstract, sequenced and logical, i.e. they can be easily coded. Yet these plans do not live in the box; they are situated in our world, which is far from abstract, sequenced and logical. Suchman argues that we do not use these plans as the designers intend, but as just one resource among many that we call upon to achieve an end goal. Plans may guide Us but they do not control Us; we use them as a reference.
Can Candy Crush show us the way?
There is ubicomp out there that works. If you cannot think of an example, then maybe that interface has succeeded and disappeared from our perceived view. There are also some aspects of smartphones that are intuitive – Candy Crush didn’t earn $1 billion a year with an interface we couldn’t work with.
So perhaps we should research more into what does work to find out why it works, using HCI to narrow the distance between Us and that proxy, so we can fade the interface away. With glimpses of that knowledge, we can start to help technology realise more of its potential.
- Paul Dourish and Scott D. Mainwaring. 2012. Ubicomp’s Colonial Impulse. In Proceedings of UbiComp ’12, 133–142.
- Kristina Höök, Phoebe Sengers, and Gerd Andersson. 2003. Sense and sensibility: evaluation and interactive art. In Proceedings of CHI ’03, 241–248. https://doi.org/10.1145/642611.642654
- Lucy A. Suchman. 2007. Human-machine reconfigurations: Plans and situated actions. Cambridge University Press.
- Mark Weiser. 1991. The Computer for the 21st Century. Scientific American 265, 3: 94–104. https://doi.org/10.1038/scientificamerican0991-94
- Peter Wright. 2011. Reconsidering the H, the C, and the I. Interactions 18, 5: 28. https://doi.org/10.1145/2008176.2008185
Author: Peter Glick