CHI 2019 Annotated Bibliography (Part 1)

After the 2019 CHI conference (technically the ACM CHI Conference on Human Factors in Computing Systems) and blogging about our own paper on design approaches to privacy, I wanted to highlight other work that I found interesting or thought-provoking in a sort of annotated bibliography. The papers are listed in no particular order, though most relate to one or more themes that I’m interested in (privacy, design research, values in design practice, critical approaches, and speculative design).

(I’m still working through the stack of CHI papers that I downloaded to read, so hopefully this is the first of two or three posts.)

  • James Pierce. 2019. Smart Home Security Cameras and Shifting Lines of Creepiness: A Design-Led Inquiry. Paper 45, 14 pages. https://doi.org/10.1145/3290605.3300275 — Pierce uses a design-led inquiry to illustrate and investigate three data practices of IoT products and services (digital leakage, hole-and-corner applications, and foot-in-the-door devices), providing some conceptual scaffolding for thinking about how privacy emerges differently in relation to varying technical (and social) configurations. Importantly, I like that Pierce is pushing design researchers to go beyond conceptualizing privacy as “creepiness”, through his exploration of three tropes of data practices.
  • Renee Noortman, Britta F. Schulte, Paul Marshall, Saskia Bakker, and Anna L. Cox. 2019. HawkEye – Deploying a Design Fiction Probe. Paper 422, 14 pages. https://doi.org/10.1145/3290605.3300652 — Building on Schulte’s concept of a design fiction probe, Noortman et al. have participants interact with a (beautifully designed!) control panel in the home over three weeks, acting in the role of a caregiver in a design fiction about dementia care. The paper furthers the use of design fiction as a participatory and embodied experience, and as a data collection tool for research. The authors provide some useful reflections on the ways participants imagined and helped build out the fictional world in which they were participating.
  • Yaxing Yao, Justin Reed Basdeo, Smirity Kaushik, and Yang Wang. 2019. Defending My Castle: A Co-Design Study of Privacy Mechanisms for Smart Homes. Paper 198, 12 pages. https://doi.org/10.1145/3290605.3300428 — Yao et al. use co-design techniques to explore privacy concerns and potential privacy mechanisms with a range of participants (including diversity in age). Some interesting ideas arise from participants, such as creating an IoT “incognito mode,” along with concerns about the accessibility of these systems. Tensions sometimes arise as well, with participants wanting to trust IoT agents like Alexa as a ‘true friend’ who won’t spy on them, yet harboring some distrust of the companies creating these systems. I like that the authors point to a range of modalities where we might place responsibility for IoT privacy – in the hardware, apps, platform policy, or operating modes. It ties nicely into questions others have asked about how responsibility for privacy is distributed, or what happens when we “hand off” responsibility for protecting values from one part of a sociotechnical system to another.
  • Kristina Andersen and Ron Wakkary. 2019. The Magic Machine Workshops: Making Personal Design Knowledge. Paper 112, 13 pages. https://doi.org/10.1145/3290605.3300342 — Andersen and Wakkary outline a set of workshop techniques to help participants generate personal materials. I appreciate the commitments made in the paper, such as framing workshops as something that should benefit participants themselves, as well as researchers, in part by centering the workshop on the experience of individual participants. They propose a set of workshop elements; it’s nice to see these explicated here, as they help convey a lot of tacit knowledge about running workshops (the details of which are often abbreviated in most papers’ methods sections). I particularly like the “prompt” element to help provide a quick initial goal for participants to engage in while situating the workshop. While the example workshops used in the paper focus on making things out of materials, I’m curious if some of the outlined workshop elements might be useful in other types of workshop-like activities.
  • Laura Devendorf, Kristina Andersen, Daniela K. Rosner, Ron Wakkary, and James Pierce. 2019. From HCI to HCI-Amusement: Strategies for Engaging what New Technology Makes Old. Paper 35, 12 pages. https://doi.org/10.1145/3290605.3300265 – Devendorf et al. start by (somewhat provocatively) asking what it might be like to explore a “non-contribution” in HCI. The paper walks through a set of projects and works its way to a set of reflections on HCI research norms that focus on the “technological new,” asking what it might mean instead to take the present or the banal more seriously. It also starts to ask what types of epistemologies are seen as legitimate in HCI, and calls for “para-research” within HCI as a way to focus attention on what is left out or unseen through dominant HCI practices.
  • Colin M. Gray and Shruthi Sai Chivukula. 2019. Ethical Mediation in UX Practice. Paper 178, 11 pages. https://doi.org/10.1145/3290605.3300408 – Through a set of case study observations and interviews, Gray and Chivukula study how UX designers enact ethics in practice. The paper provides a lot of good detail about the ways UX designers bring ethics to the forefront and some of the challenges they face. The authors contribute a set of relationships, or mediators, connecting individual designers’ practices, organizational practices, and applied ethics.
  • Sarah E. Fox, Kiley Sobel, and Daniela K. Rosner. 2019. Managerial Visions: Stories of Upgrading and Maintaining the Public Restroom with IoT. Paper 493, 15 pages. https://doi.org/10.1145/3290605.3300723 – Through interviews, participant observations, and analysis of media materials, Fox et al. investigate managerial labor in regulating access to public bathroom resources. They craft a story of regulation (in a broad sense), about how the bathroom’s management is entangled with local politics and on-the-ground moral beliefs, corporate values, imagined future efficiencies through technology, and strategic uses of interior and technological design. This entanglement enables particular types of control, granting some people access to resources while making access harder for others.
  • William Gaver, Andy Boucher, Michail Vanis, Andy Sheen, Dean Brown, Liliana Ovalle, Naho Matsuda, Amina Abbas-Nazari, and Robert Phillips. 2019. My Naturewatch Camera: Disseminating Practice Research with a Cheap and Easy DIY Design. Paper 302, 13 pages. https://doi.org/10.1145/3290605.3300532 – Gaver et al. detail a DIY nature camera, disseminated in partnership with a BBC television series and built by over 1,000 people. Interestingly, while similar tools could be used for citizen science efforts, the authors are clear that they are instead trying to create a type of public engagement with research that focuses on more intimate kinds of encounters, and on engaging people with less technical expertise in making. The cameras help create intimate “encounters” with local wildlife (plus the paper includes some cute animal photos!).
  • Sandjar Kozubaev, Fernando Rochaix, Carl DiSalvo, and Christopher A. Le Dantec. 2019. Spaces and Traces: Implications of Smart Technology in Public Housing. Paper 439, 13 pages. https://doi.org/10.1145/3290605.3300669 — Kozubaev et al.’s work adds to a growing body of research questioning and reframing what the “home” means in relation to smart home technology. The authors conduct design workshops with residents (and some managers) in US public housing, providing insight into housing situations where (1) the “home” is not a single-family, middle-class household, and (2) the potential end users of smart home technologies may not have control or consent over the technologies used, and are already subject to various forms of state surveillance.
  • Shruthi Sai Chivukula, Chris Watkins, Lucca McKay, and Colin M. Gray. 2019. “Nothing Comes Before Profit”: Asshole Design In the Wild. Paper LBW1314, 6 pages. https://doi.org/10.1145/3290607.3312863 — This late-breaking work by Chivukula et al. investigates the /r/assholedesign subreddit to explore the concept of “asshole design,” particularly in comparison to the concept of “dark patterns.” They find that asshole design uses some dark pattern strategies, but that dark patterns tend to trick users into doing certain things, while asshole design often restricts uses of products and more often involves non-digital artifacts. I think there may be an interesting future regulatory discussion about asshole design (and dark patterns). On one hand, one might consider whether dark pattern or asshole design practices fit under the FTC’s definition of “unfair and deceptive practices” for possible enforcement action against companies. On the other hand, as some legislators introduce bills to ban the use of dark patterns, it becomes very important to think carefully about how dark patterns are defined, and what might get included and excluded in those definitions; the way this work suggests a set of practices related to, but distinct from, dark patterns could help inform future policy discussions.

Where’s the Rest of Design? Or, Bringing Design to the Privacy Table: Broadening “Design” in “Privacy by Design” Through HCI [Paper Talk]

This post is based on a talk given at the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019) in Glasgow, UK. The full research paper by Richmond Wong and Deirdre Mulligan that the talk is based on, “Bringing Design to the Privacy Table: Broadening ‘Design’ in ‘Privacy by Design’ Through the Lens of HCI,” can be found here: [Official ACM Version] [Open Access Pre-Print Version]

In our paper, “Bringing Design to the Privacy Table: Broadening Design in Privacy by Design,” we conduct a curated literature review to make two conceptual arguments:

  1. There is a broad range of design practices used in human-computer interaction (HCI) research that have been underutilized in Privacy by Design efforts.
  2. Broadening Privacy by Design’s notion of what “design” can do can help us more fully address privacy, particularly in situations where we don’t yet know what concepts or definitions of privacy are at stake.

But let me start with some background and motivation. I’m both a privacy researcher, studying how to develop technologies that respect privacy, and a design researcher, who designs things to learn about the world.

I was excited several years ago to hear about a growing movement called “Privacy by Design”: the idea that privacy protections should be embedded into products and organizational practices during the design process, rather than addressing privacy retroactively. Privacy by Design has been put forward in regulatory guidance from the US and other countries, and more recently in the EU’s General Data Protection Regulation. Yet these regulations don’t provide much guidance about what Privacy by Design means in practice.

In interactions with and field observations of the interdisciplinary Privacy by Design community—including lawyers, regulators, academics, practitioners, and technical folks—I’ve found a lot of recognition of the complexity of privacy: that it is an essentially contested concept; that there are many conceptualizations of privacy; that privacy from companies is different than privacy from governments; that there are different privacy harms; and so forth.


Privacy by Design conceptualizes “design” in a relatively narrow way

But the discussion of “design” seems much less complex. I had assumed Privacy by Design would mean applying HCI’s rich breadth of design approaches to privacy initiatives – user-centered design, participatory design, value-sensitive design, speculative design, and so on.

Instead, design seemed to be used narrowly, either as a way to implement the law via compliance engineering, or as a way to solve specific privacy problems. Design was largely framed as a deductive way to solve a problem, using approaches such as encryption techniques or building systems to comply with fair information practices. These are all important and necessary privacy initiatives, but I kept finding myself asking, “where’s the rest of design?” Not just the deductive, problem-solving aspects of design, but also its inductive, exploratory, and forward-looking aspects.

There’s an opportunity for Privacy by Design to make greater use of the breadth of design approaches used in HCI

There’s a gap here between the way the Privacy by Design community views design and the way the HCI community views design. Since HCI researchers and practitioners are in a position to help support or implement Privacy by Design initiatives, it’s important to try to help broaden the notion of design in Privacy by Design to more fully bridge this gap.

So our paper aims to fulfill two goals:

  1. Design in HCI is more than just solving problems. We as HCI privacy researchers can more broadly engage the breadth of design approaches in HCI writ large. There are opportunities to build connections between the HCI privacy research community and the HCI design research and research-through-design communities, in order to use design in relation to privacy in multiple ways.
  2. Privacy by Design efforts risk missing out on the full benefits that design can offer if they stick with a narrower solution-and-compliance orientation to design. From HCI, we can help build bridges with the interdisciplinary Privacy by Design community, and engage them in understanding a broader view of design.

So how might we characterize the breadth of ways that HCI uses design in relation to privacy? In the paper, we conduct a curated review of HCI research to explore the breadth and richness of how design practices are used in relation to privacy. We searched for HCI papers that use both the terms “privacy” and “design,” curating a corpus of 64 papers. We read and open-coded each paper by asking a set of questions, including: Why is design used? By whom is design done? And for whom is design done? Using affinity diagramming on the open codes, we came up with a set of categories, or dimensions, which we used to re-code the corpus. In this post I’m going to focus on the dimensions that emerged when we looked at the “why design?” question, which we call the purposes of design.


We use four purposes to discuss the breadth of reasons why design might be used in relation to privacy

We describe four purposes of design:

  • Design to solve a privacy problem;
  • Design to inform or support privacy;
  • Design to explore people and situations; and
  • Design to critique, speculate, and present critical alternatives.

Note that we use these to talk about how design has been used in privacy research specifically, not about all design writ large (that would be quite a different and broader endeavor!). In practice these categories are not mutually exclusive, and they are not the only way to look at the space, but looking at them separately helps give some analytical clarity. Let’s briefly walk through each of these design purposes.

To Solve a Privacy Problem

First, design is seen as a way to solve a privacy problem – the purpose that occurred most often in the papers we looked at. And I think this is often how we think about design colloquially, as a set of practices to solve problems. This is also how design is often framed in Privacy by Design discussions.

When viewing design in this way, privacy is presented as a problem that has already been well-defined before the design process begins, and a solution is designed to address that definition of the problem. A lot of responsibility for protecting privacy is thus placed in the technical system.

For instance, if a problem of privacy is defined as the harms that result from long-term data processing and aggregation, we might design a system that limits data retention. If a problem of privacy is defined as not being identified, we might design a system that provides anonymity.

To Inform or Support Privacy

Second, design is seen as a way to inform or support actors who must make privacy-relevant choices, rather than solving a privacy problem outright. This was also common in our set of papers. Design to inform or support privacy treats privacy problems as problems of information or tools: if users receive information in better ways, or have better tools, then they can make more informed choices about how to act in privacy-preserving ways.

A lot of research has been done on how to design usable privacy policies or privacy notices – but it’s still up to the user to read the notice and make a privacy-relevant decision. Other design work in this vein includes privacy icons, controls, dashboards, and visualizations, as well as educational materials and activities.

In these approaches, a lot of responsibility for protecting privacy is placed in the choices that people make, informed by a design artifact. The protection of privacy doesn’t arise from the design of the system itself, but rather from how a person chooses to use the system. This orientation towards privacy fits well with US privacy regulations that make individuals responsible for managing and controlling their own data.

To Explore People and Situations (Related to Privacy)

Third is using design to explore people and situations. Design is used as a mode of inquiry, to better understand what privacy or the experience of privacy means to certain people, in certain situations. Design here is not necessarily about solving an immediate problem.

Design probes and collaborative design workshops are examples of approaches here. For example, a project I presented at CSCW 2018 involved presenting booklets with conceptual designs of potentially invasive products to technology practitioners in training. We weren’t looking to gather feedback in order to develop these conceptual ideas into usable products. Instead, the goal was to use these conceptual design ideas as provocations to better understand the participants’ worldviews. How do they conceptualize privacy when they see these designs? How do their reactions help us understand where they place responsibility for addressing privacy?

Here, privacy is understood as a situated experience, which emerges from the practices of particular groups in specific contexts or situations. The goal is less about solving a privacy problem, and more about understanding how privacy gets enacted and experienced.

To Critique, Speculate, or Present Critical Alternatives About Privacy

Fourth is design to critique, speculate, or present critical alternatives. (By critical I don’t mean negative or mean-spirited; rather, I mean critical in the sense of reflexive reflection or careful analysis.) Design here is not about exploring the world as it is, but focuses on how the world could be. Often this consists of creating conceptual designs that provoke, opening a space to surface and discuss social values. These help us discuss worlds we might strive to achieve or ones we want to avoid. Privacy in this case is situated in different possible sociotechnical configurations of the world, thinking about privacy’s social, legal, and technical aspects together.

For example, in a project I presented at DIS 2017, we created advertisements for fictional sensing products, like a bodily implant for workplace employees. This helped us raise questions beyond basic ones about data collection and use. The designs helped us ask: How is privacy implicated in the workplace, or through employment law? Can consent really occur given these power dynamics? They also helped us ask normative questions, such as: Who gets to have privacy and who doesn’t? Who or what should be responsible for protecting privacy? Might we look to technical design, to regulations, to market mechanisms, or to individual choice to protect privacy?

Design Is a Political, Values-Laden Choice

So in summary, these are the four purposes of design that we identified in this paper: using design to solve, to inform and support, to explore, and to critique and speculate. Again, in practice, they’re not discrete categories. Many design approaches, like user-centered design or participatory design, use design for multiple purposes.


Using design in different ways suggests different starting points for how we might think about privacy

But this variety of purposes for how design relates to privacy is also a reminder that design isn’t a neutral process, but is itself political and values-laden. (Not political in terms of liberal and conservative, but political in the sense that the choices we make about how to use design carry power and social implications.) Each design purpose suggests a different starting place for how we orient ourselves towards conceptualizing and operationalizing privacy. We might think about privacy as:

  • a technical property;
  • a user-made choice;
  • a situated experience; or
  • sociotechnically situated.

Privacy can be many and all of these things at once, but the design methods we choose, and the reasons why we choose to use design, help to suggest or foreclose different orientations toward privacy. These choices also suggest that responsibility for privacy might be located in different places — in a technical system, in a person’s choices, in a platform’s policies, in the law, in the market, and so forth.


Research using design to solve and design to inform and support appeared more often in the papers that we looked at

Now, I’ve been discussing these four design purposes as if they were equal, but they weren’t equal in our corpus. Coding each paper with as many categories as applied, we found that a little over half the papers we looked at used design to solve a privacy problem, and a little over half used design to inform or support. Less than a quarter used design to explore; even fewer used design to critique and speculate. We don’t claim that the exact percentages are representative of all the privacy literature, but there’s a qualitative difference here: most of the work we reviewed uses design to solve privacy problems or to inform and support privacy.

We are arguing for a big-tent approach to Privacy by Design: using design in all of these ways helps us address a broader set of conceptions of privacy.

This suggests that there’s an opportunity for us to build bridges between the HCI privacy research community, which has rich domain expertise, and the HCI design research and research-through-design communities, which have rich design methods expertise, particularly in using design to explore, and to critique and speculate.

So that’s Argument 1: we have the opportunity to build new bridges among HCI communities to more fully make use of each other’s expertise, and of a broader range of design methods and purposes.

Argument 2 is that Privacy by Design has largely (with some exceptions) thought about design as a problem-solving process. Privacy by Design research and practice could expand that thinking to make use of the fuller breadth of design reflected in HCI.

Implications for Design Collaboration

So what might some of these collaborations within and across fields look like, if we want to make use of more of design’s breadth? For example, if we as privacy researchers develop a set of usable privacy tools to inform and support most people’s privacy decision making, that might be complemented with design to explore, so that we can better understand the often marginalized populations for whom those tools don’t work. For instance, Diana Freed et al.’s work shows that social media privacy and security tools can be used against victims of intimate partner violence, violating their privacy and safety. Or consider an emerging set of problems we face in thinking about privacy in physically instrumented spaces: how does consent work, and what conceptions of privacy and privacy risk are at play? We can complement design-to-solve and design-to-support efforts with design to critique and speculate, crafting future scenarios that try to understand what concepts of privacy might be at play, and how privacy can surface differently when technical, social, or legal aspects of the world change.

From a design research perspective, I think there’s growing interest in creating provocative artifacts to surface discussions about privacy, particularly in relation to new and emerging technologies. Critically reflecting on my own design research work, I think it can be tempting to speak only to other designers and resort to conceptions of privacy that say “surveillance is creepy,” without digging deeper into other approaches to privacy. But by collaborating with privacy researchers, we can bring more domain expertise and theoretical depth to these design explorations and speculations, and engage a broader set of privacy stakeholders.

Industry privacy practitioners working on Privacy by Design initiatives might consider incorporating more UX researchers and designers from their organizations, as privacy allies and as design experts. Approaches that use design to critique and speculate may also align well with privacy practitioners’ stated desire for contextual and anticipatory privacy tools that help them “think around corners,” as reported by Ken Bamberger and Deirdre Mulligan.

Regulators working on Privacy by Design could incorporate more designers (in addition to engineers and computer scientists) in regulatory discussions, so that this richness of design practice isn’t lost when the words “by design” are written into law.

Moreover, there’s an opportunity here for us as an HCI community to bring HCI’s rich notions of what design can mean to Privacy by Design, so that design is seen not only as a problem-solving process, but also as one that makes use of the multi-faceted, inductive, and exploratory approaches this community engages in.


 

Paper Citation: Richmond Y. Wong and Deirdre K. Mulligan. 2019. Bringing Design to the Privacy Table: Broadening “Design” in “Privacy by Design” Through the Lens of HCI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, Paper 262, 17 pages. DOI: https://doi.org/10.1145/3290605.3300492

Exploring Implications of Everyday Brain-Computer Interface Adoption through Design Fiction

This blog post is a version of a talk I gave at the 2018 ACM Designing Interactive Systems (DIS) Conference based on a paper written with Nick Merrill and John Chuang, entitled When BCIs have APIs: Design Fictions of Everyday Brain-Computer Interface Adoption. Find out more on our project page, or download the paper: [PDF link] [ACM link]

In recent years, brain-computer interfaces, or BCIs, have shifted from far-off science fiction, to medical research, to the realm of consumer-grade devices that can sense brainwaves and EEG signals. Brain-computer interfaces have also featured more prominently in corporate and public imaginations, such as Elon Musk’s project, which has been described as creating a global shared brain, or fears that BCIs will result in thought control.

Most of these narratives and imaginings about BCIs tend to be utopian or dystopian, imagining radical technological or social change. We instead aim to imagine futures that are not radically different from our own. In our project, we use design fiction to ask: how can we graft brain-computer interfaces onto the everyday and mundane worlds we already live in? And how might BCI uses, benefits, and labor practices be unevenly distributed when they are adopted?


Interrogating Biosensing Privacy Futures with Design Fiction (video)

 

I presented this talk in November 2017 at the Berkeley I School PhD Research Reception. The talk discusses findings from two of our papers:

Richmond Y. Wong, Ellen Van Wyk and James Pierce. (2017). Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies. In Proceedings of the ACM Conference on Designing Interactive Systems (DIS ’17). https://escholarship.org/uc/item/7r229796

Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce and John Chuang. (2017). Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proceedings of the ACM on Human-Computer Interaction (CSCW 2018 Online First), 1(2), Article 111 (November 2017), 27 pages. https://escholarship.org/uc/item/78c2802k

More about this project and some of the designs can be found here: biosense.berkeley.edu/projects/sci-fi-design-fiction/

Speculative and Anticipatory Orientations Towards the Future

This is part 3 in a three-part series of posts based on work I presented at Designing Interactive Systems (DIS) this year on analyzing concept videos. Read part 1 or part 2, find out more on the project page, or download the full paper.

After doing a close reading and analyzing the concept videos for Google Glass (a pair of glasses with a heads up display) and Microsoft HoloLens (a pair of augmented reality goggles), we also looked at media reaction to these videos and these products’ announcements.

After both concept videos were released, media authors used the videos as a starting point to further imagine the future world with Glass and HoloLens, and the implications of living in those worlds. Yet they portrayed the future in two different ways: some discussed the future by critiquing the world depicted in the companies’ concept videos, while others accepted the depicted worlds. We distinguish between these two orientations, terming them speculative and anticipatory.
