How tangible does something have to be to count as an example of tangible computing? Taken very literally, nearly all modern computing could be classed as tangible, since we interact through the physical objects of (usually) a keyboard and mouse; however, things are a little more complex than that. In a 2001 article Brygg Ullmer and Hiroshi Ishii, two highly influential researchers in the field of tangible computing, set out their framework for what constitutes a ‘tangible user interface’ (TUI). For Ullmer and Ishii a TUI should blur the boundaries between input and output devices. As an illustration they contrast an abacus, whose physical beads and rods allow numbers to be physically represented while also providing the means of controlling and manipulating them, with a mouse/keyboard and display screen, where the mouse/keyboard provide the control and the screen provides the representation of the output.
This then raises the question of whether the touch-screen is a TUI. A touch-screen can certainly be argued to blur the boundary between input and output: by providing input through touches or gestures on the screen, the user gets output displayed on that very same screen. Of course the output is still just a visual representation, unlike the abacus, which provided both physical input and output. Then again, visual representation is often used in tangible interfaces; for example, the ‘tokens’ shown in the video of Ullmer’s thesis defence are physical objects that one manipulates to affect the output as shown on a display screen. Whether the touch-screen counts as a tangible interface seems to be debated in the field, with some papers directly comparing a touch-screen interface with a tangible one while others incorporate the touch-screen into tangible applications. The fact of touching something leans the touch-screen towards being a tangible interface, especially given that, as Ullmer and Ishii point out, the word ‘tangible’ is derived from the Latin for touch. However, as El-Glaly et al. note, touch-screens still rely primarily on visual information: even though the user may be manipulating something through touch, there is nothing physical to grasp, and this does seem to limit how tangible touch-screens can really be.
So if the jury is out on touch-screens as tangible interfaces, where does that leave a technology such as Wall++? The idea behind Wall++ is that an ‘intelligent wall’ can be created relatively cheaply using conductive paint, copper tape and a bit of circuitry. These intelligent walls are able to sense objects in the room and can be used for interactivity through touch and gesture. The question arises, though, of whether Wall++ is merely a sensor rather than a form of TUI. Wall++ is in essence a sensor, but many other TUI applications would not be possible without sensing technology, so this in itself is not a problem. For me the beauty of Wall++ is that its object-sensing ability opens up the possibility of turning everyday objects in a room into a form of tangible interface: no special ‘tokens’ would be required to interact with a particular technology, just ordinary stuff.
The Wall++ application of tangible computing is very much a technology situated in the environment, and this creates an interesting connection with the field of ubicomp. Wall++ sits in the background and is integrated into its surroundings; it does not require one to interact with a special object to make it work, it simply picks up on the ordinary things people do. This fits very well with Weiser’s vision of seamless, integrated technology and hints at a potentially exciting future.
Ullmer, B. and Ishii, H. (2001) ‘Emerging Frameworks for Tangible User Interfaces’ in Carroll, J.M. (ed.) Human-Computer Interaction in the New Millennium, Boston, MA: Addison-Wesley, pp. 579–601.
Dourish, P. (2001) Where the Action Is, Cambridge, MA: The MIT Press.
Author: Hattie Rowling