Digital Civics and the art of not solving the problem

For many years, HCI focused on the usability and accessibility of work systems and their functions, with research in laboratories used to understand the usability of interfaces and the design of new systems (Kuutti and Bannon, 2014). However, traces of the humanistic view, with its emphasis on context, research in situ and attempts to involve people in government work, could be seen in HCI in the 80s and even earlier. For example, the anthropologist Lucy Suchman (1987), introducing the notion of situated action in her book Plans and Situated Actions, highlighted that plan-based models are not enough and that the world is far more complex than simply following a plan. Even then, some researchers (e.g. Kling, 1978; Weiss et al., 1986; Sackman, 1968) touched upon how technologies can involve people in government work and democratic procedures (Vlachokyriakos et al., 2016; Corbett and Le Dantec, 2018). Yet, until recently, interest in the context of interaction had barely surfaced.

But with the development of ubiquitous computing, seamless technologies spread beyond our workplaces into our lives, and the humanistic view in HCI research became more mainstream. There has been, as some call it, a ‘Civic Turn’ in HCI (Johnson et al., 2017) – a growing interest in understanding and designing sociotechnical systems that support the engagement of citizens in a variety of civic decision-making (Johnson et al., 2017; Peacock et al., 2018). And I believe that it is during this turn that the notion of Digital Civics was born within HCI, challenging aspects of what Kuutti and Bannon (2014) called the ‘Interaction Paradigm’ – research that studies issues outside of their context and treats that valuable context as static.

In this blog post I discuss how Digital Civics research positions itself within HCI. To do this, I discuss the issues raised by the problem-solving approach, emphasising the importance of sense-making and of caring about context, and conclude with an example demonstrating that projects labelled as ‘unsuccessful’ should sometimes still be carried out, and might even be necessary to challenge traditional notions and avoid remaining static. What is important is to learn through these initiatives, modify our strategies and try to tackle the issues we face.

Starting with the problems

Let’s start with: what is HCI? And more to the point, what is good quality HCI research? The foregrounding of these questions seems to pinpoint an overriding insecurity that allows for a jockeying for position in terms of finding a unified approach. We are introduced to different perspectives. But here I would like to discuss the one offered by Oulasvirta and Hornbaek (2016) in their paper HCI Research as Problem-Solving – the problem-solving perspective – as, to me, it surfaces important questions that Digital Civics raises within the field of HCI.

This problem-solving approach has its roots firmly in Laudan’s (1978) philosophy of science. Instead of gauging research by approach, method or theory, the authors propose to assess its quality in terms of its problem-solving capacity – its ability to provide a solution to crucial aspects of HCI problems that is widely transferable and carries a high level of confidence in its validity.

Adapting Laudan’s (1978) concept to the field of HCI, the paper attempts to position all HCI research into three proposed problem types: empirical, conceptual and constructive. Nice. It’s always good to have a framework, a ‘theory of everything’. But with a field so complex, can nebulous concepts be forced to fit into boxes so neatly labelled?

Is the problem-solving approach the answer to defining HCI research?

The proposed aspects of problem-solving capacity (e.g. the significance of the problem addressed, the effectiveness of and confidence in the proposed solution) seem important and, if critically reviewed, could potentially improve the quality of research. But the issue that won’t leave my mind after reading the paper is whether this approach is widely applicable to all HCI research, as claimed by the authors, or whether it is a simple SWOT-analysis-like tool – in their words, a ‘thinking tool’ – that can be employed for self-assessment only by some individual studies.

There is a danger that the problem-solving approach can dismiss, or label as ‘insignificant’, work that tries to ‘define the problem scene’ rather than provide a solution. For example, Encinas et al. (2018), in their study of teen shoplifters, explain that a wide range of design research aims to characterise the problem rather than solve it. Many researchers employ design practices such as Critical Design or Design Fiction in order to critically evaluate common assumptions rather than to provide a solution (ibid.). Although it does not provide solutions, this type of research challenges the norms and raises doubts.

Ok, let’s assume that we do solve problems. Because we do: creating a community for breastfeeding women to share and review public breastfeeding places amongst themselves (Balaam et al., 2015), allowing people to create their own commissioning platforms (Garbett et al., 2016), providing opportunities for citizens to have a professional film-making experience (Schofield et al., 2015).

So, how should we assess their value?

The authors say that we should evaluate our research in terms of ‘how it advances our ability to solve important problems relevant to human use of computers’. But in the case of Digital Civics we try to focus on complex problems that the traditional system could not address. It seems to me that, to have the most impact, Digital Civics should try to steer in and out of the system, as it is impossible to ignore the system we function in. But to challenge it, we need to create initiatives that are quite radical – that do not correspond to norms. It seems appropriate to refer to the words of Geoff Mulgan (2014), Chief Executive of NESTA*: ‘If you are too much in the system, you risk your radical edge. If you are too much outside, you might have little impact.’ Does this mean that, according to the problem-solving approach, if we challenge the system and thus might have minimum impact, our research is NOT GOOD? Does it mean that radical initiatives should be avoided and we should remain static?

The problem-solving approach is in danger of encouraging ‘solutionism’ – a term introduced by Michael Dobbins and further developed by Evgeny Morozov (2013). Solution-focused research often creates problems that do not exist, or simplifies problems in order to provide a solution. It seems to me that Digital Civics challenges the problem-solving approach to HCI by ‘engaging’ with solutionism. Digital Civics tries to understand problems and elicit responses – challenging the traditional transactional system – rather than necessarily solving the problems. Put simply, it raises the question of whether we are able to provide effective solutions to problems when we might not yet fully understand what they are.

Big problem with Big data

I think the Big Data constantly generated around us allows us to look at the big practical problem of the problem-solving approach, and to track how Digital Civics positions itself in the area of Big Data.

In the world of ubiquitous computing we can see (or rather we can’t) Big Data generated around us constantly. We can quantify ourselves by tracking our sleep, calorie intake, number of steps and even how many coffees we had this week. We can track our utility consumption and how many times we took the metro this month, or look back at our personal history via Facebook’s ‘Year in Review’ feature.

Researchers aim to advance, develop and implement technologies to gather, store and analyse the data. Developers try to solve problems by generating risk assessment algorithms that estimate the likelihood of a person committing another crime in the near future, affecting decisions about sentencing. Yes, eat your heart out, Charlie Brooker. But even apart from the moral or philosophical concerns of situations that would seem like the stuff of sci-fi, do the developers concerned focus too much on problem-solving, looking at data in a vacuum and failing to consider the context it is generated in? Isn’t this an important question to consider when our lives are so driven by data?

Maybe researchers are starting to recognise the importance of context. Different researchers aim to understand and apply context from their own perspective. For example, some focus on developing new algorithms for mobile recommender systems that provide personalised suggestions to enhance users’ experience (e.g. Lathia, 2015; Pimenidis et al., 2018). In developing these algorithms, researchers look at what Dourish (2004) called representational context: location, time of the day, weather, the people around us. Is this what we consider context?
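To see how thin this representational notion of context can be, here is a minimal sketch (in Python, with entirely hypothetical venue data and a made-up `recommend` function – not any real system’s API) of a recommender that filters on nothing but location, time of day and weather:

```python
# Hypothetical venue records: "representational context" reduced to a
# handful of measurable attributes, as the critique describes.
venues = [
    {"name": "Park cafe",  "distance_km": 0.4, "open_hours": range(8, 18), "outdoor": True},
    {"name": "Night bar",  "distance_km": 1.2, "open_hours": range(18, 24), "outdoor": False},
    {"name": "Food truck", "distance_km": 0.2, "open_hours": range(11, 15), "outdoor": True},
]

def recommend(venues, hour, max_km, raining):
    """Filter venues by location, time of day and weather -- nothing else."""
    return [
        v["name"]
        for v in venues
        if v["distance_km"] <= max_km
        and hour in v["open_hours"]
        and (not raining or not v["outdoor"])  # if raining, only indoor venues
    ]

# At noon, within 1 km, dry weather: two nearby open venues qualify.
print(recommend(venues, hour=12, max_km=1.0, raining=False))
```

Everything the system ‘knows’ about the situation is the handful of fields in each record; whatever is not captured there simply does not exist for the algorithm.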

boyd and Crawford (2012), looking at context from a different perspective, state that Big Data is often minimised in order to fit neatly into the mathematical algorithms being developed. They argue that without its context, data fails to have meaning and thus loses its value. Numerous researchers use data from social media websites such as Twitter or Facebook to draw conclusions about social networks, illustrating how strong people’s relationships are in the form of a graph – relationships that for many years sociologists and anthropologists explored through traditional methods such as interviews or observations. When looking at these networks and communities, how can we draw conclusions about the importance or strength of a relationship between people by, for example, just measuring the frequency of interaction (ibid.)? I might ‘like’, comment on and share the posts of a friend on Facebook – a colleague who posts interesting articles about parenthood – but never ‘like’ any post of my sister’s. A colleague with whom, according to my phone’s location-based data, I spent a lot of time during the week. In silence. Can a context that is so important be this quantified data generated by our smartphones, or is it beyond the numbers we constantly generate?
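boyd and Crawford’s point about frequency-based measures can be made concrete with a toy sketch (Python; the interaction log and `tie_strength` function are invented purely for illustration, not taken from any study cited here):

```python
from collections import Counter

# Hypothetical interaction log: each entry is one "like", comment or share.
# Fourteen interactions with a colleague, two with an old friend --
# and none at all with a sister.
interactions = ["colleague"] * 14 + ["old_friend"] * 2

def tie_strength(log):
    """Rank contacts purely by interaction frequency -- the kind of proxy
    boyd and Crawford argue strips away the context that gives data meaning."""
    counts = Counter(log)
    total = sum(counts.values())
    return {person: n / total for person, n in counts.most_common()}

print(tie_strength(interactions))
# The sister never appears in the log, so by this measure the relationship
# "doesn't exist" -- exactly the reduction the critique targets.
```

The numbers are internally consistent and computable, but they tell us nothing about which of these relationships actually matters.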

It seems that many researchers agree about the importance of context, but it is the definition of context that seems to cause uncertainties. Seaver (2015) raises what is, in my view, an important point: that maybe these uncertainties should make us consider how ‘context’ is constructed by different people. They argue that it is important to consider how ‘practices of big data themselves produce context in various and particular ways’ (Seaver, 2015: 1106-1107).

In my view, Digital Civics, along with researchers from some other HCI areas, tries to raise this issue and to apply a more critical approach that considers people’s experience of a data-driven life (e.g. Elsden et al., 2015; Puussaar et al., 2017). Digital Civics does not see the context of data as a problem; quite the opposite – it brings the context to life, giving data its meaning. It explores and questions the issues that our transactional approach to data has created.

Digital Civics challenges the notion of using citizens as simple data collectors, looking instead for ways to enable citizens to actively collect data themselves. In Taylor et al.’s (2015) data-in-place project, for example, data about a street’s environment was gathered collectively by residents themselves, so that their concerns about a redevelopment plan could be raised with the council. This stressed the importance of residents’ active, rather than passive, involvement in data gathering (ibid.). In this way, the data-in-place concept introduces the idea of strengthening the relations between people, data and places, and of how people and communities, rather than organisations, can interact with data.

Digital Civics also does not reduce the notion of communities to simple meaningful graphs; instead, it mobilises them. It looks at how we can empower citizens rather than the data. In the case of Spokespeople, Maskell et al. (2018), working with cycling advocates and local authorities, explored how cyclists can not only collect but also understand and utilise data to carry out collective action for meaningful change. Making sense of data in this way – putting it in the hands of people and infrastructuring community capacities – materialises data into something meaningful, thereby increasing its value.

It seems to me that in the above examples, these challenging approaches to data and its context, in which citizens are active actors, might not solve specific problems, but they look at the possibilities people can have through data collection practices and raise important concerns: about the ownership of the data created, the difficulty of equally representing the views of all stakeholders, and the issues of abandonment and sustainability. Maybe these challenges, which were probably created by the problem-solving approach, need to be addressed further by Digital Civics researchers if we aim to design for social innovation.

Crowdsourcing: Engine of Innovation

If Digital Civics truly aims to design for social innovation, then we need to think about how we can create capacities among citizens that will help them maintain the co-created infrastructure in the long run. It may seem controversial and far from ideal, but the story of Turkopticon, where crowdsourcing was utilised, is perhaps one we can learn something from.

The area of crowdwork has developed so swiftly that we’re spoilt for choice in terms of crowdsourcing platforms. Irani and Silberman (2013) offer an analysis of one. Born in 2005, Amazon Mechanical Turk (AMT) acts as a meeting point for requesters (employers) who want to outsource jobs split into micro-tasks, and workers, also known as Turkers (politically incorrect as this may seem), who are usually paid to complete these tasks. AMT enables companies and researchers to crowdsource their work. So, on the surface at least, it is an amazing match-making place where jobs can be done in a matter of hours, giving workers flexible working hours and a choice of tasks to complete.

AMT takes its name from the Mechanical Turk, a chess-playing automaton which faked the ability to compete with humans (Howe, 2006). But unfortunately, AMT borrowed the principle as well as the name. In the 18th century, Kempelen, the creator of The Turk, had a chess master hidden inside the automaton, performing the job that the machine could not (ibid.). In the 21st century, AMT has managed to make the workers – actually the whole crowd that performs the job – invisible. It is this invisibility that Irani and Silberman (2013) criticised in their paper.

So, how do you ‘hide’ a whole crowd? Amazon does a much better job of representing requesters on its platform than workers: workers’ ratings are used to filter and monitor access to jobs, requesters can decide whether they will pay for completed tasks with no obligation to say why, and with Turkers being contractors there are no minimum wage requirements (Irani and Silberman, 2013).
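The asymmetry described above can be sketched in a few lines (Python; the records and functions are hypothetical illustrations, not AMT’s actual data model or API):

```python
# Hypothetical records mirroring the asymmetry: workers carry a visible
# approval rating on the platform; requesters carry no rating at all.
workers = [
    {"id": "w1", "approval_rate": 0.99},
    {"id": "w2", "approval_rate": 0.91},
]
requesters = [{"id": "r1"}, {"id": "r2"}]  # no reputation field to screen on

def eligible_workers(workers, min_approval):
    """Requesters can gate access to tasks on workers' past approval rate..."""
    return [w["id"] for w in workers if w["approval_rate"] >= min_approval]

def eligible_requesters(requesters):
    """...but workers have no equivalent field by which to screen requesters."""
    return [r["id"] for r in requesters]

print(eligible_workers(workers, min_approval=0.95))  # ['w1']
print(eligible_requesters(requesters))               # ['r1', 'r2'] -- everyone passes
```

Turkopticon, in effect, bolts the missing reputation field onto the requester records from the outside.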

Many have raised ethical concerns about this requester/worker inequality (e.g. Irani and Silberman, 2013; Gleibs, 2017; Salehi et al., 2015). Irani and Silberman (2013) were among the few who decided to respond by creating Turkopticon, a browser extension for Firefox and Chrome aimed at enabling workers to assess requesters and share feedback on their experiences – thus making requesters accountable for their actions and allowing workers to support each other.

Turkopticon was designed with the idea of advocating for the empowerment of workers. Ironically, though, its creators seem to see their creation as something which has inadvertently – mainly thanks to media representation – depicted workers as helpless drones and themselves as ‘saviours’ (Irani and Silberman, 2016). It is easy to understand how this might happen, when a focus on concepts like ‘digital sweatshops’ has fostered the idea that creativity is alien to ‘low-skilled’ Turkers.

Should we be surprised?

Turkopticon was designed by placing a task on AMT asking workers to form a ‘Workers’ Bill of Rights’. The results yielded five areas of assessment according to which requesters are rated (Irani and Silberman, 2013). Would the perception of Turkopticon have been different if it had been designed with Turkers rather than for them?

Yes, it may have. But in my view, the developers of Turkopticon learned their lesson. They reflected on the problem that raised its head, listened to the views of Turkers and changed their tactics.

In a way, although the initial design could hardly be considered participatory, Turkopticon’s creators have echoed Björgvinsson et al.’s (2010) idea of democratic innovation by creating controversies which not only build publics but sustain and evolve them. They have created a Thing: initially an artefact that, after being made public and given to its participants – designers in their own right – raises concerns and possibilities for further exploration. The result is the picture of a big infrastructure: communities of Turkers expanding the functionality of Turkopticon, moderators and maintainers who keep it afloat (Irani and Silberman, 2016), the Dynamo activism platform built with Turkers (Salehi et al., 2015), and Turkers being brought inside the academia that produces knowledge about crowdsourcing, for example by involving them in conferences such as CHI 2014 (Irani and Silberman, 2016).

So is Turkopticon a kind of Frankenstein’s monster, something designed to render workers visible which contributed further to their invisibility? Maybe, but only, I would argue, if we see this monster as a necessary one, creating controversies and friction, providing workers with identity and ultimately stimulating change or at least debate.

Maybe the problem with the problem-solving approach is that when we tell our stories, we have an inherent desire to make them successful ones. Turkopticon, at first glance and through media representation, could be seen as a successful solution that gives workers representation. But looking more closely, we realise that it does not solve problems; instead, it brings the issues in the system to light. In this case, the issue raised is the need to question how the researcher is positioned in a global context, and how this impacts our initiatives.

The story of Turkopticon, in my view, is a good example demonstrating that if we are to build a sustainable infrastructure, we should not hurry to solve problems. If we learn one thing, it is that the process is a long one in which we need to reflect constantly. It is important to learn from our initiatives – even, and perhaps especially, from those that did not go to plan – rather than simply dubbing them unsuccessful. Importantly, through this learning, we need to modify our strategies and create other initiatives that take the issues further.


To me, Digital Civics positions itself within HCI as a radical multi-discipline that challenges traditional notions. I believe it raises many questions about problem-solving approaches, helping us to avoid being overtaken by solutionism. I think that in focusing on design for social innovation, we need to remember that it is okay to carry out initiatives that might be labelled as ‘unsuccessful’. These projects can lead us to do exactly what we need to do: identify the challenges that need addressing. But I believe it is important to decide on the next step. If we are to design for social innovation, maybe our agenda is also to address these challenges further.


*NESTA – the National Endowment for Science Technology and the Arts


Madeline Balaam, Rob Comber, Ed Jenkins, Selina Sutton, and Andrew Garbett. 2015. FeedFinder: A Location-Mapping Mobile Application for Breastfeeding Women. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 1709-1718.

Erling Björgvinsson, Pelle Ehn, and Per-Anders Hillgren. 2010. Participatory design and “democratizing innovation”. In Proceedings of the 11th Biennial Participatory Design Conference (PDC ’10). ACM, New York, NY, USA, 41-50.

Danah boyd and Kate Crawford. 2012. Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society 15/5: 662-679.

Eric Corbett and Christopher A. Le Dantec. 2018. Exploring Trust in Digital Civics. In Proceedings of the 2018 on Designing Interactive Systems Conference 2018, pp. 9-20. ACM.

Paul Dourish. 2004. What we talk about when we talk about context. Personal and Ubiquitous Computing 8(1): 19–30.

Chris Elsden, David Kirk, Mark Selby, and Chris Speed. 2015. Beyond Personal Informatics: Designing for Experiences with Data. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15). ACM, New York, NY, USA, 2341-2344.

Enrique Encinas, Mark Blythe, Shaun Lawson, John Vines, Jayne Wallace, and Pam Briggs. 2018. Making Problems in Design Research: The Case of Teen Shoplifters on Tumblr. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Paper 72.

Jeff Howe. 2006. The Rise of Crowdsourcing. Wired 14(6), 1-5. Available online. [Accessed on 6 November 2018].

Ilka H. Gleibs. 2017. Are all “research fields” equal? Rethinking practice for the use of data from crowdsourcing market places. Behavior Research Methods 49(4), 1333–1342.

Lilly C. Irani and M. Six Silberman. 2013. Turkopticon: interrupting worker invisibility in amazon mechanical turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). ACM, New York, NY, USA, 611-620.

Lilly C. Irani and M. Six Silberman. 2016. Stories We Tell About Labor: Turkopticon and the Trouble with “Design”. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 4573-4586.

Ian G. Johnson, Alistair MacDonald, Jo Briggs, Jennifer Manuel, Karen Salt, Emma Flynn, and John Vines. 2017. Community Conversational: Supporting and Capturing Deliberative Talk in Local Consultation Processes. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 2320-2333.

Rob Kling. 1978. Automated information systems as social resources in policy making. Proceedings of the 1978 annual conference-Volume 2, 666–674.

Kari Kuutti and Liam J. Bannon. 2014. The Turn to Practice in HCI: Towards a Research Agenda. In CHI ’14 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3543–3552.

Frank N. Laird. 1993. Participatory Analysis, Democracy, and Technological Decision Making. Science, Technology & Human Values 18, 3: 341–361.

Neal Lathia. 2015. The anatomy of mobile location-based recommender systems. In Recommender Systems Handbook, 493-510. Springer, Boston, MA.

Larry Laudan. 1978. Progress and its problems: Towards a theory of scientific growth. University of California Press.

Thomas Maskell, Clara Crivellaro, Robert Anderson, Tom Nappey, Vera Araújo-Soares, and Kyle Montague. 2018. Spokespeople: Exploring Routes to Action through Citizen-Generated Data. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Paper 405, 12 pages.

Evgeny Morozov. 2013. To save everything, click here: The folly of technological solutionism. Public Affairs.

Geoff Mulgan. 2014. The radical’s dilemma: an overview of the practice and prospects of Social and Public Labs – Version 1. Available online. [Accessed on 28 December 2018].

Antti Oulasvirta and Kasper Hornbaek. 2016. HCI Research As Problem-Solving. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 4956–4967.

Sean Peacock, Robert Anderson, and Clara Crivellaro. 2018. Streets for People: Engaging Children in Placemaking Through a Socio-technical Process. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Paper 327, 14 pages.

Elias Pimenidis, Nikolaos Polatidis, and Haralambos Mouratidis. 2018. Mobile recommender systems: Identifying the major concepts. Journal of Information Science.

Aare Puussaar, Adrian Clear, and Peter Wright. 2017. Enhancing Personal Informatics Through Social Sensemaking. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), 6936–6942.

Harold Sackman. 1968. A public philosophy for real time information systems. In AFIPS ’68 (Fall, part II): Proceedings of the December 9-11, 1968, fall joint computer conference, 1491–1498.

Niloufar Salehi, Lilly C. Irani, and Michael S. Bernstein. 2015. We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), 1621–1630.

Guy Schofield, Tom Bartindale, and Peter Wright. 2015. Bootlegger: Turning Fans into Film Crew. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 767-776.

Nick Seaver. 2015. The nice thing about context is that everyone has it. Media, Culture & Society. 37/7. 1101-1109.

Lucy A. Suchman. 1987. Plans and situated actions: The problem of human-machine communication. Cambridge university press.

Janet A. Weiss, Judith E. Gruber, and Robert H. Carver. 1986. Reflections on value: policy makers evaluate federal information systems. Public Administration Review 46: 497-505.

Alex S. Taylor, Siân Lindley, Tim Regan, David Sweeney, Vasillis Vlachokyriakos, Lillie Grainger, and Jessica Lingel. 2015. Data-in-Place: Thinking through the Relations Between Data and Community. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 2863-2872.

Vasillis Vlachokyriakos, Clara Crivellaro, Christopher A Le Dantec, Eric Gordon, Pete Wright, and Patrick Olivier. 2016. Digital Civics. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems – CHI EA ’16 (CHI EA ’16), 1096– 1099.

Author: Irina Pavlovskaya
