CHI 2019 Annotated Bibliography (Part 1)

After the 2019 CHI conference (technically the ACM CHI Conference on Human Factors in Computing Systems) and blogging about our own paper on design approaches to privacy, I wanted to highlight other work that I found interesting or thought-provoking in a sort of annotated bibliography. Listed in no particular order, though most relate to one or more themes that I’m interested in (privacy, design research, values in design practice, critical approaches, and speculative design).

(I’m still working through the stack of CHI papers that I downloaded to read, so hopefully this is part 1 of two or three posts).

  • James Pierce. 2019. Smart Home Security Cameras and Shifting Lines of Creepiness: A Design-Led Inquiry. Paper 45, 14 pages. https://doi.org/10.1145/3290605.3300275 — Pierce uses a design-led inquiry to illustrate and investigate three data practices of IoT products and services (digital leakage, hole-and-corner applications, and foot-in-the-door devices), providing some conceptual scaffolding for thinking about how privacy emerges differently in relation to varying technical (and social) configurations. Importantly, I like that Pierce is pushing design researchers to go beyond conceptualizing privacy as “creepiness”, through his exploration of three tropes of data practices.
  • Renee Noortman, Britta F. Schulte, Paul Marshall, Saskia Bakker, and Anna L. Cox. 2019. HawkEye – Deploying a Design Fiction Probe. Paper 422, 14 pages. https://doi.org/10.1145/3290605.3300652 — Building on Schulte’s concept of a “design fiction probe,” Noortman et al. had participants interact with a (beautifully designed!) control panel in their homes over three weeks, acting in the role of a caregiver in a design fiction about dementia care. The paper furthers the use of design fiction as a participatory and embodied experience, and as a data collection tool for research. The authors provide some useful reflections on the ways participants imagined and helped build out the fictional world in which they were participating.
  • Yaxing Yao, Justin Reed Basdeo, Smirity Kaushik, and Yang Wang. 2019. Defending My Castle: A Co-Design Study of Privacy Mechanisms for Smart Homes. Paper 198, 12 pages. https://doi.org/10.1145/3290605.3300428 — Yao et al. use co-design techniques to explore privacy concerns and potential privacy mechanisms with a range of participants (including diversity in age). Some interesting ideas arise from participants, such as creating an IoT “incognito mode,” as well as raising concerns about accessibility for these systems. Sometimes tensions arise, with participants wanting to trust IoT agents like Alexa as a ‘true friend’ who won’t spy on them, yet harboring some distrust of the companies creating these systems. I like that the authors point to a range of modalities for where we might place responsibility for IoT privacy – in the hardware, apps, platform policy, or operating modes. It’s a nice tie into questions others have asked about how responsibility for privacy is distributed, or what happens when we “handoff” responsibility for protecting values from one part of a sociotechnical system to another part.
  • Kristina Andersen and Ron Wakkary. 2019. The Magic Machine Workshops: Making Personal Design Knowledge. Paper 112, 13 pages. https://doi.org/10.1145/3290605.3300342 — Andersen and Wakkary outline a set of workshop techniques to help participants generate personal materials. I appreciate the commitments made in the paper, such as framing workshops as something that should benefit participants themselves, as well as researchers, in part by centering the workshop on the experience of individual participants. They propose a set of workshop elements; it’s nice to see these explicated here, as they help convey a lot of tacit knowledge about running workshops (the details of which are often abbreviated in most papers’ methods sections). I particularly like the “prompt” element to help provide a quick initial goal for participants to engage in while situating the workshop. While the example workshops used in the paper focus on making things out of materials, I’m curious if some of the outlined workshop elements might be useful in other types of workshop-like activities.
  • Laura Devendorf, Kristina Andersen, Daniela K. Rosner, Ron Wakkary, and James Pierce. 2019. From HCI to HCI-Amusement: Strategies for Engaging what New Technology Makes Old. Paper 35, 12 pages. https://doi.org/10.1145/3290605.3300265 – Devendorf et al. start by (somewhat provocatively) asking what it might be like to explore a “non-contribution” in HCI. The paper walks through a set of projects and works its way to a set of reflections about the norms of HCI research focusing on the “technological new,” asking what it might mean instead to take the present or the banal more seriously. The paper also starts to ask what types of epistemologies are seen as legitimate in HCI. The paper calls for “para-research” within HCI as a way to focus attention on what is left out or unseen through dominant HCI practices.
  • Colin M. Gray and Shruthi Sai Chivukula. 2019. Ethical Mediation in UX Practice. Paper 178, 11 pages. https://doi.org/10.1145/3290605.3300408 – Through a set of case study observations and interviews, Gray and Chivukula study how UX designers navigate ethics in practice. The paper provides a lot of good detail about the ways UX designers bring ethics to the forefront and some of the challenges they face. The authors contribute a set of relationships, or mediators, connecting individual designers’ practices to organizational practices and applied ethics.
  • Sarah E. Fox, Kiley Sobel, and Daniela K. Rosner. 2019. Managerial Visions: Stories of Upgrading and Maintaining the Public Restroom with IoT. Paper 493, 15 pages. https://doi.org/10.1145/3290605.3300723 – Through interviews, participant observations, and analysis of media materials, Fox et al. investigate managerial labor in regulating access to public bathroom resources. They craft a story of regulation (in a broad sense), about how the bathroom’s management is entangled among local politics and on-the-ground moral beliefs, corporate values, imagined future efficiencies through technology, and strategic uses of interior and technological design. This entanglement allows for particular types of control, allowing some access to resources and making it harder for others.
  • William Gaver, Andy Boucher, Michail Vanis, Andy Sheen, Dean Brown, Liliana Ovalle, Naho Matsuda, Amina Abbas-Nazari, and Robert Phillips. 2019. My Naturewatch Camera: Disseminating Practice Research with a Cheap and Easy DIY Design. Paper 302, 13 pages. https://doi.org/10.1145/3290605.3300532 – Gaver et al. detail a DIY nature camera, shown in partnership with a BBC television series and built by over 1000 people. Interestingly, while similar tools could be used for citizen science efforts, the authors are clear that they are instead trying to create a type of public engagement with research that focuses on creating more intimate types of encounters, and engaging people with less technical expertise in making. The cameras help create intimate “encounters” with local wildlife (plus the paper includes some cute animal photos!).
  • Sandjar Kozubaev, Fernando Rochaix, Carl DiSalvo, and Christopher A. Le Dantec. 2019. Spaces and Traces: Implications of Smart Technology in Public Housing. Paper 439, 13 pages. https://doi.org/10.1145/3290605.3300669 — Kozubaev et al.’s work adds to a growing body of work questioning and reframing what the “home” means in relation to smart home technology. The authors conduct design workshops with residents (and some managers) in US public housing, providing insight into housing situations where (1) the “home” is not a single-family middle class grouping, and (2) the potential end users of smart home technologies may not have control or consent over the technologies used, and are already subject to various forms of state surveillance.
  • Shruthi Sai Chivukula, Chris Watkins, Lucca McKay, and Colin M. Gray. 2019. “Nothing Comes Before Profit”: Asshole Design In the Wild. Paper LBW1314, 6 pages. https://doi.org/10.1145/3290607.3312863 — This late-breaking work by Chivukula et al. investigates the /r/assholedesign subreddit to explore the concept of “asshole design,” particularly in comparison to the concept of “dark patterns.” They find that asshole design uses some dark pattern strategies, but that dark patterns tend to trick users into doing certain things, while asshole design often restricts uses of products and more often involves non-digital artifacts. I think there may be an interesting future regulatory discussion about asshole design (and dark patterns). On one hand, one might consider whether dark pattern or asshole design practices might fit under the FTC’s definition of “unfair and deceptive practices” for possible enforcement action against companies. On the other, as some legislators are introducing bills to ban the use of dark patterns, it becomes very important to think carefully about how dark patterns are defined, and what might get included and excluded in those definitions. The way this work suggests a set of practices related to, but distinct from, dark patterns could help inform future policy discussions.

Engaging Technologists to Reflect on Privacy Using Design Workbooks

This post summarizes a research paper, Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks, co-authored with Deirdre Mulligan, Ellen Van Wyk, John Chuang, and James Pierce. The paper will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) on Monday November 5th (in the afternoon Privacy in Social Media session). Full paper available here.

Recent wearable and sensing devices, such as Google Glass, Strava, and internet-connected toys, have raised questions about ways in which privacy and other social values might be implicated by their development, use, and adoption. At the same time, legal, policy, and technical advocates for “privacy by design” have suggested that privacy should be embedded into all aspects of the design process, rather than being addressed after a product is released, or treated as just a legal issue. If privacy is to be addressed through technical design processes, the ability of technology professionals to surface, discuss, and address privacy and other social values becomes vital.

Companies and technologists already use a range of tools and practices to help address privacy, including privacy engineering practices, or making privacy policies more readable and usable. But many existing privacy mitigation tools are either deductive or assume that privacy problems are already known and well defined in advance. In practice, we often don’t have privacy concerns well conceptualized in advance when creating systems. Our research shows that design approaches (drawing on a set of techniques called speculative design and design fiction) can help better explore, define, and perhaps even anticipate what we mean by “privacy” in a given situation. Rather than trying to apply a single, abstract, universal definition of privacy, these methods help us think about privacy as relations among people, technologies, and institutions in different types of contexts and situations.

Creating Design Workbooks

We created a set of design workbooks — collections of design proposals or conceptual designs, drawn together to allow designers to investigate, explore, reflect on, and expand a design space. We drew on speculative design practices: in brief, our goal was to create a set of slightly provocative conceptual designs to help engage people in reflections or discussions about privacy (rather than propose specific solutions to problems posed by privacy).

A set of sketches that comprise the design workbook

Inspired by science fiction, technology research, and trends from the technology industry, we created a couple dozen fictional products, interfaces, and webpages of biosensing technologies, or technologies that sense people. These included smart-camera-enabled neighborhood watch systems, advanced surveillance systems, implantable tracking devices, and non-contact remote sensors that detect people’s heartrates. In earlier design work, we reflected on how putting the same technologies in different types of situations, scenarios, and social contexts would vary the types of privacy concerns that emerged (such as the different types of privacy concerns that would emerge if advanced miniature cameras were used by the police, by political advocates, or by the general public). However, we wanted to see how non-researchers might react to and discuss the conceptual designs.

How Did Technologists-In-Training View the Designs?

Through a series of interviews, we shared our workbook of designs with masters students in an information technology program who were training to go into the tech industry. We found several ways in which they brought up privacy-related issues while interacting with the workbooks, and highlight three of those ways here.

TruWork — A product webpage for a fictional system that uses an implanted chip allowing employers to keep track of employees’ location, activities, and health, 24/7.

First, our interviewees discussed privacy by taking on multiple user subject positions in relation to the designs. For instance, one participant looked at the fictional TruWork workplace implant design by imagining herself in the positions of an employer using the system and an employee using the system, noting how the product’s claim of creating a “happier, more efficient workplace” was a value proposition aimed at the employer rather than the employee. While the system promises to tell employers whether or not their employees are lying about why they need a sick day, the participant noted that there might be many reasons why an employee might need to take a sick day, and those reasons should be private from their employer. These reflections are valuable, as prior work has documented how considering the viewpoints of direct and indirect stakeholders is important for addressing social values in design practices.

CoupleTrack — an advertising graphic for a fictional system that uses an implanted chip that people in a relationship wear in order to keep track of each other’s location and activities.

A second way privacy reflections emerged was when participants discussed the designs in relation to their professional technical practices. One participant compared the fictional CoupleTrack implant to a wearable device for couples that he was building, in order to discuss different ways in which consent to data collection can be obtained and revoked. CoupleTrack’s embedded nature makes it much more difficult to revoke consent, while a wearable device can be more easily removed. This is useful because we’re looking for ways workbooks of speculative designs can help technologists discuss privacy in ways that they can relate back to their own technical practices.

Airport Tracking System — a sketch of an interface for a fictional system that automatically detects and flags “suspicious people” by color-coding people in surveillance camera footage.

A third theme that we found was that participants discussed and compared multiple ways in which a design could be configured or implemented. Our designs tend to describe products’ functions but do not specify technical implementation details, allowing participants to imagine multiple implementations. For example, a participant looking at the fictional automatic airport tracking and flagging system discussed the privacy implications of two possible implementations: one where the system only identifies and flags people with a prior criminal history (which might create extra burdens for people who have already served their time for a crime and have been released from prison); and one where the system uses behavioral predictors to try to identify “suspicious” behavior (which might go against a notion of “innocent until proven guilty”). The designs were useful at provoking conversations about the privacy and values implications of different design decisions.

Thinking About Privacy and Social Values Implications of Technologies

This work provides a case study showing how design workbooks and speculative design can be useful for thinking about the social values implications of technology, particularly privacy. In the time since we’ve made these designs, some (sometimes eerily) similar technologies have been developed or released, such as workers at a Swedish company embedding RFID chips in their hands, or Logitech’s Circle Camera.

But our design work isn’t meant to predict the future. Instead, what we tried to do is take some technologies that are emerging or on the near horizon, and think seriously about ways in which they might get adopted, or used and misused, or interact with existing social systems — such as the workplace, or government surveillance, or school systems. How might privacy and other values be at stake in those contexts and situations? We aim for these designs to help shed light on the space of possibilities, in an effort to help technologists make more socially informed design decisions in the present.

We find it compelling that our design workbooks helped technologists-in-training discuss emerging technologies in relation to everyday, situated contexts. These workbooks don’t depict far-off speculative science fiction with flying cars and spaceships. Rather, they imagine future uses of technologies by having someone look at a product website, an amazon.com page, or an interface, and think about the real and diverse ways in which people might experience those technology products. Using these techniques, which focus on the potential adoptions and uses of emerging technologies in everyday contexts, helps raise issues that might not be immediately obvious if we only think about positive social implications of technologies. It also helps surface issues that we might miss if we only think about social implications in terms of “worst case scenarios” or dystopias.

Paper Citation:

Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce, and John Chuang. 2017. Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proc. ACM Hum.-Comput. Interact. 1, CSCW, Article 111 (December 2017), 26 pages. DOI: https://doi.org/10.1145/3134746


This post is crossposted with the ACM CSCW Blog

Exploring Implications of Everyday Brain-Computer Interface Adoption through Design Fiction

This blog post is a version of a talk I gave at the 2018 ACM Designing Interactive Systems (DIS) Conference based on a paper written with Nick Merrill and John Chuang, entitled When BCIs have APIs: Design Fictions of Everyday Brain-Computer Interface Adoption. Find out more on our project page, or download the paper: [PDF link] [ACM link]

In recent years, brain-computer interfaces, or BCIs, have shifted from far-off science fiction, to medical research, to the realm of consumer-grade devices that can sense brainwaves and EEG signals. Brain-computer interfaces have also featured more prominently in corporate and public imaginations, such as Elon Musk’s project that has been said to create a global shared brain, or fears that BCIs will result in thought control.

Most of these narratives and imaginings about BCIs tend to be utopian or dystopian, imagining radical technological or social change. However, we instead aim to imagine futures that are not radically different from our own. In our project, we use design fiction to ask: how can we graft brain-computer interfaces onto the everyday and mundane worlds we already live in? How can we explore how BCI uses, benefits, and labor practices may not be evenly distributed when they get adopted?


Interrogating Biosensing Privacy Futures with Design Fiction (video)


I presented this talk in November 2017, at the Berkeley I School PhD Research Reception. The talk discusses findings from two of our papers:

Richmond Y. Wong, Ellen Van Wyk and James Pierce. (2017). Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies. In Proceedings of the ACM Conference on Designing Interactive Systems (DIS ’17). https://escholarship.org/uc/item/7r229796

Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce and John Chuang. (2017). Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proceedings of the ACM on Human-Computer Interaction (CSCW 2018 Online First). 1, 2, Article 111 (November 2017), 27 pages. https://escholarship.org/uc/item/78c2802k

More about this project and some of the designs can be found here: biosense.berkeley.edu/projects/sci-fi-design-fiction/

Framing Future Drone Privacy Concerns through Amazon’s Concept Videos

This blog post is a version of a talk that I gave at the 2016 4S conference and describes work that has since been published in an article in The Journal of Human-Robot Interaction co-authored with Deirdre Mulligan entitled “These Aren’t the Autonomous Drones You’re Looking for: Investigating Privacy Concerns Through Concept Videos.” (2016). [Read online/Download PDF]

Today I’ll discuss an analysis of two of Amazon’s concept videos depicting their future autonomous drone service, how the videos frame privacy issues, and how they can be viewed in conversation with privacy laws and regulation.

As a privacy researcher with a human-computer interaction background, I’ve become increasingly interested in how processes of imagination about emerging technologies contribute to narratives about the privacy implications of those technologies. Today I’m discussing some thoughts emerging from a project looking at Amazon’s drone delivery service. In 2013, Amazon – the online retailer – announced Prime Air, a drone-based package delivery service. When they made their announcement, the actual product was not ready for public launch – and it’s still not available as of today. But what’s interesting is that at the time the announcement was made, Amazon also released a video that showed what the world might look like with this service of automated drones. And they released a second similar video in 2015. We call these videos concept videos.


Using design fiction and science fiction to interrogate privacy in sensing technologies

This post is a version of a talk I gave at DIS 2017 based on my paper with Ellen Van Wyk and James Pierce, Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies, in which we used a science fiction novel as the starting point for creating a set of design fictions to explore issues around privacy. Find out more on our project page, or download the paper: [PDF link] [ACM link]

Many emerging and proposed sensing technologies raise questions about privacy and surveillance. For instance, new wireless smart home security cameras sound cool… until we’re using them to watch a little girl in her bedroom getting ready for school, which feels creepy, like in the tweet below.

Or consider the US Department of Homeland Security’s imagined future security system. Starting around 2007, they were trying to predict criminal behavior, pre-crime, like in Minority Report. They planned to use thermal sensing, computer vision, eye tracking, gait sensing, and other physiological signals, and supposedly the system would “avoid all privacy issues.” Yet it’s pretty clear that privacy was not adequately addressed in this project, as found in an investigation by EPIC.


Image from publicintelligence.net. Note the middle bullet point in the middle column – “avoids all privacy issues.”

A lot of these types of products or ideas are proposed or publicly released – but somehow it seems like privacy hasn’t been adequately thought through beforehand. However, parallel to this, we see works of science fiction which often imagine social changes and effects related to technological change – and do so in situational, contextual, rich world-building ways. This led us to our starting hunch for our work:

perhaps we can leverage science fiction, through design fiction, to help us think through the values at stake in new and emerging technologies.

Designing for provocation and reflection might allow us to do a similar type of work through design that science fiction often does.
