Navigating the high/low risk binary - a theoretical article

Last year I wrote a couple of thousand words about the different ways in which existing work looks at how risk influences perceptions of secure messaging. This literature is quite diverse and challenging to synthesise, spanning HCI, technical security, cryptography, critical security, Science and Technology Studies (STS), and surveillance studies. I presented these words at a 2023 STS conference organised by STS Italia in Bologna. A couple of critiques from the audience stood out:

  1. What about this existing data speaks to the self-perception of people in so-called risky social and political contexts?
  2. How are self-perceptions represented, and can they even be represented through boiler-plate qual and quant methodologies?
  3. Could these be scientific projections of what we assume about other people’s levels of risk?

In reality, the questions were a lot more sophisticated than this, but I admit I do not speak STS well.

For some clarity, the literature on risk and secure messaging tends to divide people into high- and low-risk designations (particularly in security studies, HCI, and sociotechnical work). Low risk refers to people who are not going to be directly harmed by their information security practices - what people tend to think of as the ‘everyday person’. High-risk people are thought to be the opposite - typical examples include journalists, activists (my area, and hence the motivation for the review), politicians, and terrorists. What my work initially did was to review the results of studies that looked at high- and low-risk groups. Now I have rewritten the paper to look at some of the assumptions baked into this binary, some of the relational aspects of risk, and some thoughts on everyday security.

This is the most recent abstract of the article -

Information security research tends to divide people into high-risk and low-risk categories based on their perceived vulnerability to harm from digital practices. High-risk individuals, including activists and journalists, are thought to face immediate dangers such as targeted surveillance or physical harm. In contrast, low-risk individuals are assumed to engage in routine digital activities without significant threats to their security. This article critiques this binary classification within information security scholarship by exploring the conceptual and empirical instabilities inherent in this framework. Drawing on ideas from the interpretative social sciences, the article offers a disciplinary critique in three ways: 1) by bringing attention to relational aspects of risk; 2) by examining the normative underpinnings of this binary; and 3) by discussing ideas of the everyday. I consider which aspects of risk this binary renders invisible and explore alternative ways of thinking about information security.

While this article started out as a literature review, the current form is more of a critical review. I will be presenting this paper as a work in progress at Looking for Everyday Security: A Cross-Disciplinary Workshop. The things I explore in this paper are as follows:

Relational Risk We do not have to look hard to see how the edges of high- and low-risk categories blur under scrutiny. When we say that some group constitutes a high-risk population, we imply a degree of naturalisation and boundedness. We frame this group as if it were a discrete entity with some inherent characteristic inextricably linked to threat. Rather than being a universal experience, though, ideas of risk are created and maintained in a particular time and place. We can find numerous examples of how risk is linked to context, social relations, and subjective opinions.

What do labels do? Labelling across unrelated cases is an attempt to manage an unwieldy social reality. Since the results of studies depend on the specific group being studied, which is situated in a particular social context, this can create a sense of disarray for researchers trying to paint a single narrative. Trying to bring these perspectives together is trying to create one answer to a plural question. When technologists ask themselves ‘how do we design for at-risk people?’, they imply that there is a singular at-risk group, when this is simply not the case. By labelling disparate case studies in one way (high-risk), we create and perpetuate the idea that there is one best way to incorporate this information into current design directions.

Everyday Security For many people, the things they do to create a sense of security in their everyday lives are just that, everyday. They can become taken for granted as people strive for an existence free from fear. Therefore, participants’ conceptions of risk may not follow the template of what researchers expect to hear – instead of threats, harms, and mitigations, researchers may hear about participants’ daily frustrations and everyday fears. By taking these everyday frustrations seriously as a source of data, we can move away from ascriptive binaries and towards everyday security, studied in context.

I will link to the article as soon as I know that I can. In other boring literature review news, I am currently embarking on the big one for my thesis…



