This blog post is a version of a talk that I gave at the 2016 4S conference and describes work that has since been published in an article in The Journal of Human-Robot Interaction co-authored with Deirdre Mulligan entitled “These Aren’t the Autonomous Drones You’re Looking for: Investigating Privacy Concerns Through Concept Videos.” (2016). [Read online/Download PDF]
Today I’ll discuss an analysis of two of Amazon’s concept videos depicting their future autonomous drone service, how they frame privacy issues, and how these videos can be viewed in conversation with privacy laws and regulation.
As a privacy researcher with a human-computer interaction background, I’ve become increasingly interested in how processes of imagination about emerging technologies contribute to narratives about the privacy implications of those technologies. Today I’m discussing some thoughts emerging from a project looking at Amazon’s drone delivery service. In 2013, Amazon – the online retailer – announced Prime Air, a drone-based package delivery service. When they made their announcement, the actual product was not ready for public launch – and it’s still not available as of today. But what’s interesting is that alongside the announcement, Amazon also released a video showing what the world might look like with this service of automated drones. They released a second, similar video in 2015. We call these videos concept videos.
Continue reading →
This post is a version of a talk I gave at DIS 2017 based on my paper with Ellen Van Wyk and James Pierce, “Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies,” in which we used a science fiction novel as the starting point for creating a set of design fictions to explore issues around privacy. Find out more on our project page, or download the paper: [PDF link] [ACM link]
Many emerging and proposed sensing technologies raise questions about privacy and surveillance. For instance, new wireless smarthome security cameras sound cool… until we’re using them to watch a little girl in her bedroom getting ready for school, which feels creepy, like in the tweet below.
Or consider the US Department of Homeland Security’s imagined future security system. Starting around 2007, they were trying to predict criminal behavior before it occurred – pre-crime, like in Minority Report. They planned to use thermal sensing, computer vision, eye tracking, gait sensing, and other physiological signals. Supposedly, the system would “avoid all privacy issues.” Yet it’s pretty clear that privacy was not adequately addressed in this project, as found in an investigation by EPIC.
A lot of these types of products or ideas are proposed or publicly released in ways that suggest privacy hasn’t been adequately thought through beforehand. In parallel, we see works of science fiction that imagine the social changes and effects related to technological change – and do so in situational, contextual, rich world-building ways. This led us to our starting hunch for our work:
perhaps we can leverage science fiction, through design fiction, to help us think through the values at stake in new and emerging technologies.
Designing for provocation and reflection might allow us to do a similar type of work through design that science fiction often does.
Continue reading →
CSCW 2016 (ACM’s conference on Computer Supported Cooperative Work and Social Computing) took place in San Francisco last month. I attended (my second time at this conference!), and it was wonderful meeting new and old colleagues alike. I thought I would share some reflections and highlights that I’ve had from this year’s proceedings.
Many papers addressed issues of privacy from a number of perspectives. Bo Zhang and Heng Xu studied how behavioral nudges can shift behavior toward more privacy-conscious actions, rather than merely providing greater information transparency and hoping users will make better decisions. A nudge showing users how often an app accesses phone permissions made users feel creepy, while a nudge showing other users’ behaviors reduced users’ privacy concerns and elevated their comfort. I think there may be value in studying the emotional experience of privacy (such as creepiness), in addition to traditional measurements of disclosure and comfort. To me, the paper suggests a further ethical question about the use of paternalistic measures in privacy: given that nudges could affect users’ behaviors both positively and negatively toward an app, how should we make ethical decisions when designing nudges into systems?
Continue reading →