Glass Enterprise Edition Doesn’t Seem So Creepy

Google Glass has returned, as Glass Enterprise Edition. The company's website suggests that it can be used in professional settings, such as manufacturing, logistics, and healthcare, for specific work applications: accessing training videos, viewing annotated images, following hands-free checklists, or sharing your viewpoint with an expert collaborator. This is a very different imagined future for Glass than the one in the 2012 "One Day" concept video, where a dude walks around New York City taking pictures and petting dogs. In fact, the idea of using this type of product in a professional working space, collaborating with experts from your point of view, sounds a lot like the original Microsoft HoloLens concept video (mirror).

This is not to say one company followed or copied another (and in fact HoloLens' more augmented-reality-like interface and Glass' more heads-up-display-like interface will likely be used for different types of applications). It is, however, a great example of how a product's creepiness is partly related to whether it's envisioned as a device to be used in constrained contexts or not. In a great opening line that I think sums this up well, Levi Sumagaysay at Silicon Beat says:

Now Google Glass is productive, not creepy.

As I've previously written with Deirdre Mulligan [open access version] about the future worlds imagined by the original video presentations of Glass and HoloLens, Glass was originally portrayed as always-on (and potentially always recording), invisible to others, able to take information from one social context into another, and used in public spaces, all of which made it easier to see it as a creepy and privacy-infringing device. (It didn't help that the first Glass video showed the viewpoint of only a single imagined user, a 20-something-year-old white man.) Its goal seemed to be to capture information about a person's entire life, from riding the subway to getting coffee with friends, to shopping, to going on dates. And a lot of people reacted negatively to Glass' initial Explorer Edition, with Glass bans in some bars and restaurants, campaigns against it, and the rise of the colloquial term "glasshole."

In contrast, HoloLens was depicted as a very visible and very bulky device that could be easily seen, and its use was limited to a few familiar, specific places and contexts, at work or at home, so it was not portrayed as a device that could record anything at any time. Notably, the HoloLens video also avoided showing the device in public spaces. HoloLens was also presented as a productivity tool to help complete specific tasks in new ways (such as CAD, helping someone complete a task by sharing their point of view, and the ever-exciting file sharing), rather than a device that could capture everything about a user's life. And there were few public displays of concern over privacy. (If you're interested in more, I have another blog entry with more detail.)

Whether explicit or implicit, the presentation of Glass Enterprise Edition seems to recognize some of these lessons about constraining the use of such an expansive set of capabilities to particular contexts and roles. Using Glass' sensing, recording, sharing, and display capabilities within the confines of professional manufacturing, healthcare, or other work helps, on the whole, position the device as something that will not violate people's privacy in public spaces. (Though it perhaps remains to be seen what types of privacy problems related to Glass will emerge in workplaces, and how those might be addressed through design, use rules, training, and so forth.) What is perhaps more broadly interesting is how the same technology can take on different meanings with regard to privacy based on how it's situated, used, and imagined within particular contexts and assemblages.


Framing Future Drone Privacy Concerns through Amazon’s Concept Videos

This blog post is a version of a talk that I gave at the 2016 4S conference and describes work that has since been published in The Journal of Human-Robot Interaction, in an article co-authored with Deirdre Mulligan entitled "These Aren't the Autonomous Drones You're Looking for: Investigating Privacy Concerns Through Concept Videos" (2016). [Read online/Download PDF]

Today I'll discuss an analysis of two of Amazon's concept videos depicting their future autonomous drone service, how they frame privacy issues, and how these videos can be viewed in conversation with privacy laws and regulation.

As a privacy researcher with a human-computer interaction background, I've become increasingly interested in how processes of imagination about emerging technologies contribute to narratives about the privacy implications of those technologies. Today I'm discussing some thoughts emerging from a project looking at Amazon's drone delivery service. In 2013, Amazon, the online retailer, announced Prime Air, a drone-based package delivery service. When they made their announcement, the actual product was not ready for public launch, and it's still not available as of today. But what's interesting is that at the time the announcement was made, Amazon also released a video that showed what the world might look like with this service of automated drones. And they released a second, similar video in 2015. We call these videos concept videos.

Continue reading →

Using design fiction and science fiction to interrogate privacy in sensing technologies

This post is a version of a talk I gave at DIS 2017 based on my paper with Ellen Van Wyk and James Pierce, Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies, in which we used a science fiction novel as the starting point for creating a set of design fictions to explore issues around privacy. Find out more on our project page, or download the paper: [PDF link] [ACM link]

Many emerging and proposed sensing technologies raise questions about privacy and surveillance. For instance, new wireless smart home security cameras sound cool… until we're using them to watch a little girl in her bedroom getting ready for school, which feels creepy, like in the tweet below.

Or consider the US Department of Homeland Security's imagined future security system. Starting around 2007, they were trying to predict criminal behavior, pre-crime, like in Minority Report. They planned to use thermal sensing, computer vision, eye tracking, gait sensing, and other physiological signals, and supposedly it would "avoid all privacy issues." Yet it's pretty clear that privacy was not adequately addressed in this project, as found in an investigation by EPIC.

[Image: DHS presentation slide, from publicintelligence.net. Note the middle bullet point in the middle column: "avoids all privacy issues."]

A lot of these types of products or ideas are proposed or publicly released, but somehow it seems like privacy hasn't been adequately thought through beforehand. Parallel to this, however, we see works of science fiction that often imagine social changes and effects related to technological change, and do so in situational, contextual, rich world-building ways. This led us to the starting hunch for our work:

perhaps we can leverage science fiction, through design fiction, to help us think through the values at stake in new and emerging technologies.

Designing for provocation and reflection might allow us to do a similar type of work through design that science fiction often does.

Continue reading →

Reflections on CSCW 2016

CSCW 2016 (ACM's conference on Computer Supported Cooperative Work and Social Computing) took place in San Francisco last month. I attended (my second time at this conference!), and it was wonderful meeting new and old colleagues alike. I thought I would share some reflections and highlights from this year's proceedings.

Privacy

Many papers addressed issues of privacy from a number of perspectives. Bo Zhang and Heng Xu study how behavioral nudges can shift behavior toward more privacy-conscious actions, rather than merely providing greater information transparency and hoping users will make better decisions. A nudge showing users how often an app accesses phone permissions made users feel creeped out, while a nudge showing other users' behaviors reduced users' privacy concerns and elevated their comfort. I think there may be value in studying the emotional experience of privacy (such as creepiness), in addition to traditional measurements of disclosure and comfort. To me, the paper suggests a further ethical question about the use of paternalistic measures in privacy: given that nudges can affect users' behaviors both positively and negatively toward an app, how should we make ethical decisions when designing nudges into systems?

Continue reading →