I presented this talk in November 2017, at the Berkeley I School PhD Research Reception. The talk discusses findings from 2 of our papers:
Richmond Y. Wong, Ellen Van Wyk and James Pierce. (2017). Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies. In Proceedings of the ACM Conference on Designing Interactive Systems (DIS ’17). https://escholarship.org/uc/item/7r229796
Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce and John Chuang. (2017). Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proceedings of the ACM on Human-Computer Interaction (CSCW 2018 Online First), 1(2), Article 111 (November 2017), 27 pages. https://escholarship.org/uc/item/78c2802k
More about this project and some of the designs can be found here: biosense.berkeley.edu/projects/sci-fi-design-fiction/
This blog post is a version of a talk that I gave at the 2016 4S conference and describes work that has since been published in an article in The Journal of Human-Robot Interaction co-authored with Deirdre Mulligan entitled “These Aren’t the Autonomous Drones You’re Looking for: Investigating Privacy Concerns Through Concept Videos.” (2016). [Read online/Download PDF]
Today I’ll discuss an analysis of 2 of Amazon’s concept videos depicting their future autonomous drone service, how they frame privacy issues, and how these videos can be viewed in conversation with privacy laws and regulation.
As a privacy researcher with a human computer interaction background, I’ve become increasingly interested in how processes of imagination about emerging technologies contribute to narratives about the privacy implications of those technologies. Today I’m discussing some thoughts emerging from a project looking at Amazon’s drone delivery service. In 2013, Amazon – the online retailer – announced Prime Air, a drone-based package delivery service. When they made their announcement, the actual product was not ready for public launch – and it’s still not available as of today. But what’s interesting is that at the time the announcement was made, Amazon also released a video showing what the world might look like with this service of automated drones, and they released a second, similar video in 2015. We call these videos concept videos.
This post is a version of a talk I gave at DIS 2017 based on my paper with Ellen Van Wyk and James Pierce, Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies, in which we used a science fiction novel as the starting point for creating a set of design fictions to explore issues around privacy. Find out more on our project page, or download the paper: [PDF link] [ACM link]
Many emerging and proposed sensing technologies raise questions about privacy and surveillance. For instance new wireless smarthome security cameras sound cool… until we’re using them to watch a little girl in her bedroom getting ready for school, which feels creepy, like in the tweet below.
Or consider the US Department of Homeland Security’s imagined future security system. Starting around 2007, they were trying to predict criminal behavior – pre-crime, like in Minority Report – using thermal sensing, computer vision, eye tracking, gait sensing, and other physiological signals. Supposedly the system would “avoid all privacy issues.” Yet an investigation by EPIC made clear that privacy was not adequately addressed in this project.
Many of these types of products and ideas are proposed or publicly released before privacy seems to have been adequately thought through. Parallel to this, works of science fiction often imagine the social changes and effects related to technological change – and do so in situational, contextual, rich world-building ways. This led us to our starting hunch for our work:
perhaps we can leverage science fiction, through design fiction, to help us think through the values at stake in new and emerging technologies.
Designing for provocation and reflection might allow us to do a similar type of work through design that science fiction often does.
This is part 3 in a 3 part series of posts based on work I presented at Designing Interactive Systems (DIS) this year on analyzing concept videos. Read part 1, part 2, or find out more about the project on the project page or download the full paper.
After doing a close reading and analyzing the concept videos for Google Glass (a pair of glasses with a heads up display) and Microsoft HoloLens (a pair of augmented reality goggles), we also looked at media reaction to these videos and these products’ announcements.
After both concept videos were released, media authors used the videos as a starting point to further imagine the future world with Glass and HoloLens, and the implications of living in those worlds. Yet they portrayed the future in two different ways: some discussed the future by critiquing the world depicted in the companies’ concept videos, while others accepted the depicted worlds. We distinguish between these two orientations, terming them speculative and anticipatory.
This is part 2 in a 3 part series of posts based on work I presented at Designing Interactive Systems (DIS) this year on analyzing concept videos. Read part 1, part 3, or find out more about the project on the project page or download the full paper.
In this post, I walk through our close reading of the Glass and HoloLens concept videos and how they imagine potential futures. I then discuss how this analysis can be used to think about surveillance issues and other values associated with these representations of the future.
Google Glass Concept Video
Google’s concept video “Project Glass: One Day…” was released on April 4, 2012. The video portrays a day in the life of a single male Glass user as he makes his way around New York City, from when he wakes up until sunset. It is shot entirely in a first-person point of view, putting the viewer in the place of a person wearing Glass.
This is part 1 in a 3 part series of posts based on work I presented at Designing Interactive Systems (DIS) this year on analyzing concept videos. Read part 2, part 3, or find out more about the project on the project page or download the full paper.
So What is a Concept Video?
I define a concept video as a video created by a company showing a novel device or product that is not yet available for public purchase, though it might be within a few years. Concept videos depict what the world might be like if that device or product existed, and how people might interact with it or use it. An early example is Apple’s Knowledge Navigator video, while more contemporary examples include Amazon’s Prime Air video, Google’s Glass video, and Microsoft’s HoloLens video. (I’ll take a closer look at the latter two in a following blog post.) Concept videos embed a vision about the social and technical future of computing: how computing will be done, for whom, by what means, and what the norms of that world will be.