The Map Is Not The Territory

In Science and Sanity, Alfred Korzybski remarks that “A map is not the territory it represents”. This neatly encapsulates the idea that a model of reality is not, in and of itself, reality. Rather, its value derives from how closely it resembles reality, so that we may use the model to make purposeful decisions.

In recent years, this idea has become increasingly relevant. Automated decision-making processes are being incorporated into public life and service provision, corporate culture, and legal frameworks. The ubiquity of data and computing power makes these innovations a tempting avenue to cut costs and introduce ‘objectivity’ into decision-making processes.

However, if our map doesn’t match the territory, or if we are uncritical about the assumptions and positions that are embedded into our technologies by the people who make them, we make ourselves vulnerable to a variety of issues. HCI and Digital Civics researchers must keep a critical eye on these developments, in order to produce innovations that work for everyone.

Reductionism

Imagine that we are attempting to build a completely automated seafaring ship. We need to produce software of some kind that can navigate the ship between ports. As with almost any real-world problem, in the process of spelling out a solution procedure to a computer, we must reduce the situation to simple, predictable processes. We might use GPS to allow the ship to track its location and compare that to a map with its intended route plotted on it by a human, or perhaps even use some decision theory to allow the ship to chart the most efficient route by itself. We must also give the software some means to control the physical mechanisms of the ship, so that it can move.

If we are thinking ahead to avoid some anticipated problems, we might also include some data on busy shipping lanes, weather patterns to avoid, and so forth. We might include some manner of proximity sensors, in order to avoid collisions.
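To make the reduction concrete, here is a deliberately simplistic sketch of the kind of rule-based navigator described above. Everything in it (the waypoints, the sensor flags, the fallback behaviours) is invented for illustration; the point is only that every situation the ship can respond to is one a designer thought to write down in advance.

```python
# A deliberately reductive, rule-based navigator. All names, thresholds, and
# readings are hypothetical; the ship can only "understand" situations its
# designers anticipated and encoded as rules.

from dataclasses import dataclass
import math


@dataclass
class Waypoint:
    lat: float
    lon: float


def bearing_to(position: Waypoint, target: Waypoint) -> float:
    """Crude flat-plane bearing in degrees; good enough for a toy example."""
    d_lon = target.lon - position.lon
    d_lat = target.lat - position.lat
    return math.degrees(math.atan2(d_lon, d_lat)) % 360


def decide_heading(position, route, obstacle_ahead, storm_forecast):
    """Pure if-this-then-that logic: the designers' value judgements are frozen into rules."""
    if obstacle_ahead:                      # a proximity sensor has tripped
        return "hold position"              # we decided stopping is always the safe choice
    if storm_forecast:                      # weather data says avoid this leg
        return "divert to fallback route"   # we decided one fallback is enough
    next_wp = route[0]                      # anything not on the route does not exist to us
    return f"steer {bearing_to(position, next_wp):.0f} degrees"


# An example run with invented readings.
route = [Waypoint(54.97, -1.60), Waypoint(55.95, -3.19)]
print(decide_heading(Waypoint(54.50, -1.00), route,
                     obstacle_ahead=False, storm_forecast=False))
```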

What we have created here is not a perfect, ‘objective’ system. In reducing the situation to its base components, we the designers have made a series of value judgements about which parts of the situation are important. These decisions are influenced by the context in which we live, our domain knowledge, our assumptions and our values. No matter how cleverly we reduce a real-world situation, some information is likely to be lost, simply because we as designers did not know about it or did not deem it important. But suppose that we, being wise designers, are aware of this problem and put extraordinary effort into accounting for as many situations as possible in the plan we give to our software: can we create a plan that can always be adhered to?

Situatedness

According to Suchman’s (2006) idea of situated action, plans are not the be-all and end-all of human behaviour. Whilst we may make plans, and use them as tools to inform our actions, no plan we can (practically) create could possibly account for every situation. This is exemplified nicely by Edwin Hutchins’ (1983) ethnographic work concerning the wayfinding practices of Micronesian sailors.

Navigators of the European tradition chart an absolute course based on their data about the world in advance of the voyage, and then direct their efforts towards remaining ‘on course’ and ‘correcting’ deviations from the plan as they arise. This contrasts with the Micronesian approach, wherein the journey begins with an objective rather than a plan, and contextual cues learned through experience (such as currents, the positions of certain stars, the presence of particular birds, and so forth) are used to identify the necessary direction of travel on a moment-by-moment basis.

It is worth noting that both of these approaches could be considered to use ‘data’ about the world. What differs is how that data is used to determine what actions to take. The European approach privileges the course and charts as the ideal: the authority by which decisions are made and actions are taken. The Micronesian tradition takes a more situated approach, making decisions which are mindful of context and using the available navigational data (such as which currents or birds indicate the proximity of an island, or which way the stars move over the course of a night) in a more holistic fashion.

Importantly, inaccurate data is a problem for either approach, but one inaccurate data point is much less of a problem when thinking holistically. If, for example, a navigator in the Micronesian tradition misidentifies a constellation, they have other chances to notice the error based on the rest of the information available to them. They can notice incongruities and contradictions in what their data is telling them, and make a contextually-rooted decision as to how to proceed. If a European ship’s charts do not accurately reflect the territory the navigator is attempting to traverse, the entire voyage is flawed. No amount of correcting towards a course that does not reflect reality will help a ship reach its destination.
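The difference is easy to caricature in code. The toy sketch below (every cue and value is invented) shows why cross-checking several independent, fallible signals makes a single bad reading survivable, where a single authoritative chart is not:

```python
# A toy illustration, not a real navigation method: holistic navigation
# cross-checks several independent, fallible signals, so one bad reading can
# be noticed and out-voted rather than silently steering the whole voyage.

from collections import Counter


def chart_only(chart_heading: str) -> str:
    """'European' style: the pre-made chart is the single authority."""
    return chart_heading  # if the chart is wrong, every 'correction' is wrong too


def holistic(cues: dict) -> str:
    """'Situated' style: steer by whatever most of the cues agree on,
    and flag anything that disagrees as worth a second look."""
    votes = Counter(cues.values())
    consensus, _ = votes.most_common(1)[0]
    outliers = [name for name, heading in cues.items() if heading != consensus]
    if outliers:
        print(f"warning: {outliers} disagree with the consensus '{consensus}'")
    return consensus


# One misread constellation does not doom the voyage:
cues = {
    "star_path": "north-west",    # misidentified constellation
    "swell_pattern": "west",
    "bird_sightings": "west",
    "current": "west",
}
print(holistic(cues))        # -> west, with a warning about star_path
print(chart_only("north"))   # -> north, right or wrong
```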

Now, let us consider again our entirely automated ship. Thanks to the simplifications necessary to create software capable of navigation, it has no capacity to act in a situated manner whatsoever. The nature of the majority of automated decision-making is the blind adherence to rules and plans. If this, then that. As disparaging as we have been, a ship being navigated in the European manner is still ultimately being operated by people, who can notice when they are on a collision course, or have run aground, or encountered some other unforeseen problem. To produce an automated navigator, we must either concoct a plan so perfectly comprehensive that the ship never encounters a situation that hasn’t been accounted for, or produce software which is capable of acting in a situated manner. Accomplishing either of these tasks requires herculean effort.

Machine learning may present an eventual solution to this problem, and neural networks can already behave in a manner intriguingly similar to situated navigation when performing simple tasks in constrained environments. If you have verified that you are a human when signing up for a web service recently, odds are good that you have contributed to the effort to train computers to navigate roads and recognise traffic lights and street signs. But what happens when we try to automate decision-making around problems whose full context even humans struggle to conceptualise?

The Problem of Context

In 1991, Scientific American published an article by Mark Weiser titled The Computer for the 21st Century. In it, Weiser presents an idyllic vision of a world where “computers are so ubiquitous that no one will notice their presence”. The article tells the story of Sal, a Silicon Valley single mother whose life is catered for by technology in a way that Rogers (2006) quite rightly likens to the Victorian gentry. We are presented with a world in which everyone can live the dream of the neoliberal nouveau riche – provided that one’s definition of “everyone” extends only to corporate executives in a region afflicted with stark income inequality.

Fast-forward to December 2016 when, in the wake of the US presidential election and EU referendum, a journalist named Carole Cadwalladr came across a company by the name of Cambridge Analytica. They had used suspect data processing methods in an attempt to influence these democratic processes, and were only discovered after the damage was done. How did we get here?

Bluntly, we got here because to a certain extent Weiser’s vision came true. We hadn’t noticed how much we relied on computers to receive information and construct our social lives, nor had we noticed how easily those systems could be influenced. Technology has become so much a part of our lives, and so interconnected, that even if we try to see the proverbial wood for the trees, perceiving the ecosystem of the whole forest is beyond us.

Science fiction author Iain M. Banks’s novel Excession explores the idea of an “Outside Context Problem” – a problem whose emergence is unanticipated, and whose damage is hard to mitigate, simply because it had not occurred to society at large that the issue could arise in the first place. Banks’s classic example is a society encountering a vastly more technologically advanced civilisation with ambitions of colonisation. His novels are fantastical, but perhaps an Outside Context Problem in the modern day need not be so dramatic.

The interlinked web of technologies that allows our society to function is becoming so complex that it is no longer reasonable to expect any one person to possess a working understanding of every problem that could emerge from it. None of us can say with certainty where the next problem on the scale of Cambridge Analytica will arise, how much damage it might do, or whether the technology we are designing will inadvertently allow it to happen.

The world we live in is a complex place, and many issues that seem straightforward from a designer’s context, knowledge, and experience are in reality much less so. Take, for example, identifying a person’s gender. If one has never encountered information to the contrary, it may be tempting to treat this as the kind of binary classification problem that machine learning excels at solving. Work by Keyes (2018) demonstrates that much of HCI as a field has historically done precisely that, but this model of gender does not reflect reality. In this case, the mismatch between the context the work is framed in and the reality of the broader cultural context it affects results in real harms to an already marginalised community.

The data is here to stay, and policy-makers are increasingly making decisions that affect real people based on that data, in automated or technologically mediated ways, in order to solve problems of scale or to make the results ‘objective’. China’s Social Credit system and the automated, proprietary “risk assessments” that attempt to predict recidivism in the United States are both already affecting real people in serious ways.

These systems make our societies ever more vulnerable to bad data, reductive assumptions, the magnification of already existing personal and institutional prejudices, and unforeseeable issues emerging from outside our collective context. The question is: how do we as researchers address this?

Relational Technology

As I hope is evident by this point, any attempt on my part to unilaterally present the “one true solution” to these problems would be hypocritical in the extreme, being in itself an attempt to universally apply my own context to a hugely important issue. Such a statement, whilst punchy, would be far more likely to be part of the problem than a universally applicable solution.

With that being said, I consider the Digital Civics research agenda to hold a potential solution in applying the ideas of relational and empathetic service provision to the domain of technology design more broadly. I summarise my recommendations as keeping in mind the concepts of meaningful empowerment, meaningful transparency, and meaningful engagement when conducting research.

Meaningful Empowerment

It is crucial to consider a breadth of views, contexts, and assumptions about the state of the world when producing new technologies, particularly if those technologies are likely to have a broad societal impact, or to impact vulnerable or marginalised groups. There is therefore a responsibility to give voice to individuals who may be affected, and to listen and respond meaningfully when they speak. Hamidi et al. (2018) provide an excellent example of consulting transgender individuals about the potential issues surrounding automated gender recognition, and of providing actionable design recommendations based on their concerns.

Participatory design practices allow us to incorporate the contexts and values of people other than ourselves more directly into the technologies we create or facilitate. This reduces the risk of poor assumptions or harmful secondary effects slipping through the cracks, especially if an effort is made to consult with more vulnerable groups.

Current participatory design does not present a complete solution, however. Enabling certain especially vulnerable groups, such as people with more severe dementia, to participate meaningfully in the design of technological processes is an ongoing challenge. This is well demonstrated by the difficulties encountered by Lindsay et al. (2012), and future work should address how participatory design methodologies can be applied in these areas, or whether an entirely different approach is better suited to embedding the values and contexts of these groups in the technologies that will affect them.

Meaningful Transparency

It is also important that we provide stakeholders with the means to challenge the outcomes of our research, especially if, as in the case of the automated recidivism risk assessments being used in some US states, they will be used to influence policy-making or other decisions of concern to citizens. If we insist on algorithmic decision-making in public life, the algorithms and the data they use should be as transparent as possible.

Current barriers to meaningful transparency include the focus on protecting proprietary software, and the technical know-how required to comprehend a ‘transparent’ data process that lacks any form of aid to accessibility or sense-making. The EU’s GDPR presents a good first step towards meaningful data transparency, but still suffers from practical problems of reducing complex processes to approachable, ‘plain speaking’ language.

Future work could help address this by investing in the development of initiatives that aim for approachable technical transparency, as well as methods and tools that help those without extensive technical education to understand and engage with modern data-driven technologies.

Meaningful Engagement

Bluntly, no number of design recommendations will make a significant difference to the issues surrounding algorithmic decision-making if policy-makers, industry, and communities remain unaware of how these complex systems behave. It is the responsibility of the research community to avoid an ‘ivory tower’ mentality, and to remain mindful of how we build relationships with our fellow citizens and our governments, and of how we communicate our research beyond the academic community.

Real, lasting social innovation necessitates the building of connections between different and often conflicting interest groups and stakeholders. Work done by the MEDEA Collaborative Media Initiative, for example Hillgren et al. (2011), demonstrates the power of social infrastructuring to build productive design networks composed of diverse parties, and shows that research institutions are uniquely placed to do the legwork of building such infrastructures.

Conclusion

Gaining a full appreciation of the context of a societally impactful issue, and then acting fully in accordance with the interests of all parties, is challenging to the point of infeasibility for any single person. Given how poorly automated systems act in a situated, context-sensitive way, designing technological solutions that do so is more challenging still. For the time being, people are the best context-parsing and data-checking tools we have, and so we should direct our efforts towards empowering them, not removing them in the search for a false objectivity.

 

References

Korzybski, A. (1933) Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics, The International Non-Aristotelian Library Publishing Co.

Suchman, L. (2006) Human-Machine Reconfigurations: Plans and Situated Actions, Cambridge University Press.

Hutchins, E. (1983) Understanding Micronesian Navigation, in D. Gentner & A. Stevens (Eds.), Mental Models, Hillsdale, NJ: Lawrence Erlbaum, pp. 191-225.

Weiser, M. (1991) The Computer for the 21st Century, Scientific American, pp. 94-104.

Rogers, Y. (2006) Moving on from Weiser’s Vision of Calm Computing: Engaging UbiComp Experiences, UbiComp 2006, LNCS 4206, pp. 404-421.

Keyes, O. (2018) The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition, Proc. ACM Hum.-Comput. Interact. 2, CSCW, Article 88, 22 pages.

Hamidi, F., Scheuerman, M. & Branham, S. (2018) Gender Recognition or Gender Reductionism? The Social Implications of Automatic Gender Recognition Systems, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper No. 8, Montreal, QC, Canada, ACM Press.

Lindsay, S. et al. (2012) Empathy, Participatory Design and People with Dementia, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12), pp. 521-530, Austin, Texas, USA, ACM Press.

Hillgren, P-A., Seravalli, A. & Emilson, A. (2011) Prototyping and Infrastructuring in Design for Social Innovation, CoDesign, Vol. 7, Nos. 3-4, pp. 169-183, Taylor & Francis.

Image Courtesy of Martin Nemo, licensed under CC BY 2.0


Author: Adam Parnaby
