Having earlier posted some notes from the pre-conference Doctoral Colloquium and Hybrid Design Practices workshop, I've finally gotten around to compiling - and augmenting - some notes from the main technical program of UbiComp 2009, the 11th International Conference on Ubiquitous Computing, held at the Disney Yacht Club in Orlando, Florida, last week. Before delving into my personal and rather idiosyncratic recollections from and ruminations about the conference, I want to note that there are a variety of other sources of social media around the web tagged with "ubicomp2009", including presentations on SlideShare, photos at Flickr and messages on Twitter (archives of which may be more reliably found on Twubs). I also want to note that I missed parts of some sessions, and missed both Saturday morning sessions entirely, so most gaps are due to nonattendance rather than disinterest.
Sumi Helal, General Chair of the conference, began the opening remarks [and has shared the slides from the opening remarks] by welcoming us to the conference, thanking all the volunteers, and reporting on some statistics about attendance at the conference: 255 people registered for the conference, of whom 116 were students. The Program Co-Chairs, Hans Gellersen and Sunny Consolvo, then shared some further statistics: 251 submissions (180 ten-page "full papers" and 71 four-page "notes") - the highest ever submitted to a UbiComp conference (!) - of which 31 were accepted (25 full papers and 6 notes), yielding an acceptance rate of 13.8% for papers, 8.5% for notes, and 12.4% overall.
Henry Tirri delivered the opening keynote, "Poor Man's Ubicomp", reviewing the past, present and future prospects for computing in various form factors, and finishing with an invitation to focus on how the mobile computer (aka mobile phone) can have greater impact on people in emerging economies. Henry highlighted 5 important dimensions in which mobile computing is new, that I will characterize through 5 C's (harking back to my own time at Nokia, working on a project with 3 C's):
- connectivity: mobile computers have at least one radio, and there is a global wireless infrastructure to support them
- context: they have an increasing number and variety of sensors (microphone, camera, accelerometer, light sensor, Bluetooth, GPS, WiFi and [of course] cellular radio)
- continuity: they are always with us (although a paper presented at UbiComp 2006 revealed that they may sometimes be farther than we think)
- consumption: resource tradeoffs are becoming more important [again], e.g., it may be cheaper to compute a bit than send a bit
- copiousness: there are 1-2 orders of magnitude more mobile computers than desktop computers
Henry presented a number of emerging "supersensing" capabilities, as well as some projects / applications focusing on three primary areas: traffic (e.g., automatic alternate routing), health (e.g., tracking influenza outbreaks) and entertainment (e.g., mobile games and social tagging ... which, of course, can be combined in some cases). However, the part I most enjoyed was his discussion of the ways that mobile computers can help those most in need, e.g., "the next 1 billion" people in emerging economies, or people coping with natural - or unnatural - disasters ... what he referred to as "black swans". Henry's reference to a recent special report in The Economist on telecoms in emerging markets: "Mobile Marvels" - combined with a pre-conference workshop on Globicomp and a number of papers later in the program - suggests that this is an area that is receiving increasing, and well-deserved, attention.
Clara Mancini presented "From Spaces to Places: Emerging Contexts in Mobile Privacy", in which she and her colleagues found that it was useful to augment experience sampling methods - wherein users are periodically prompted to provide information about their current or recent activities - by adding a user-specified memory phrase to mark their reported experiences. In an ethnographic study of 6 users of the Facebook iPhone application, they found that the use of these user-generated context cues during followup interviews helped reveal a variety of categories of privacy-related boundaries in the use of this popular mobile application - personal policy, etiquette, proxemic and aggregation - as well as a layer of socio-cultural subjective meaning of a location's function.
Irina Shklovski and Janet Vertesi presented "The Commodification of Location: Dynamics of Power in Location-Based Systems", in which they reported that the GPS ankle bracelets that must be worn by all convicted sex offenders in California to track their movements are resulting in increased workloads, and [possibly] less effective monitoring, for the parole supervisors who must now incorporate the huge volume of GPS tracking data into their work processes. By having to devote more time to virtual tracking, the parole officers have less time to devote to physical tracking (direct contacts with the parolees) - which is often more effective in policing their movements - and are reporting that their caseloads have shrunk from 80 to 40, and the number of cases they can effectively manage amid the deluge of data is probably closer to 20 (!). Among the new vocabulary terms I acquired during the talk was commodity fetishism, a Marxist concept in which the value of an object, once determined through the social relationship between the producer and consumer of the object, is entirely determined by other means; in this case, the commodity is location, which was once determined through direct communication between parole officers and their parolees, and is now determined through GPS tracking technology that offers questionable "value" (in this context).
Donnie Kim presented "Discovering Semantically Meaningful Places from Pervasive RF-Beacons" in which he and his colleagues improved the accuracy of tracking short, frequently visited places via an algorithm (PlaceSense) that imposes a moving window or buffer on the stream of sensed RF signals, resulting in a more stable detection of entering and leaving events. In two studies using a Nokia N95 as the location tracking device - one involving scripted tours (10 frequented places, 10 visits for each of three durations: 8, 10 and 15 minutes), another involving 4 weeks of real-life data - they demonstrated a significant improvement in the precision and recall of visited places using their algorithm vs. previous algorithms.
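To get a feel for the buffering idea, here is a toy sketch (my own illustration, not the actual PlaceSense algorithm): a visit is declared "entered" once the sets of beacons seen across a few consecutive scans stay stable, and "left" once they diverge. The window size and overlap threshold below are made-up parameters.

```python
from collections import deque

def detect_visits(scans, window=3, min_overlap=0.9):
    """Detect entering/leaving events from a stream of RF-beacon scans.

    A place is 'entered' once the beacon sets seen in `window`
    consecutive scans stay stable, and 'left' once they diverge.
    This is only a toy illustration of the buffering idea, not the
    actual PlaceSense algorithm from the paper.
    """
    buffer = deque(maxlen=window)
    events, inside = [], False
    for t, scan in enumerate(scans):
        buffer.append(set(scan))
        if len(buffer) < window:
            continue  # not enough scans buffered yet
        common = set.intersection(*buffer)
        union = set.union(*buffer)
        stable = bool(union) and len(common) / len(union) >= min_overlap
        if stable and not inside:
            events.append(("enter", t))
            inside = True
        elif not stable and inside:
            events.append(("leave", t))
            inside = False
    return events
```

Feeding it a stream where beacons "a" and "b" persist for four scans and then disappear yields a single enter/leave pair, rather than the flickering detections a naive per-scan comparison would produce.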
Andreas Bulling presented "Eye Movement Analysis for Activity Recognition", in which he and his colleagues used electrooculography (EOG) for sensing certain eye movements - saccades [another new vocabulary term], fixations and blinks - via skin electrodes (or, as Andreas put it, "ECG for the eyes"). They developed a wordbook encoding of 24 eye movements, used a sliding window for detecting patterns, and trained a linear support vector machine (SVM) that distinguished among 6 different activities of a user at a computer terminal (with 5 minute durations) - copy, read, write, video, browse, NULL - with 70.5% recall and 76.1% precision ... though I'm not sure how this compares to other approaches, nor what would represent "good enough" in this context.
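The wordbook idea can be sketched in a few lines (again, a toy illustration of my own, not the paper's exact feature set): treat short n-grams of eye-movement symbols as "words", and count their occurrences in a window to produce a feature vector that a classifier such as an SVM could then be trained on. The symbols and wordbook below are hypothetical.

```python
from collections import Counter

def wordbook_features(symbols, wordbook, n=2):
    """Count occurrences of each wordbook 'word' (an n-gram of
    eye-movement symbols) in a symbol sequence.

    Toy illustration of a wordbook encoding, not the paper's exact
    feature set. Symbols might be 'S' (saccade), 'F' (fixation)
    and 'B' (blink).
    """
    grams = Counter(tuple(symbols[i:i + n])
                    for i in range(len(symbols) - n + 1))
    return [grams[word] for word in wordbook]
```

For the sequence "SFSFB" and a wordbook of the bigrams SF, FS, FB and BB, this yields the vector [2, 1, 1, 0], which could serve as one training example per sliding window.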
Michael Buettner presented "Recognizing Daily Activities with RFID-Based Sensors" [I can't find a link for Michael or the paper], in which he distinguished three approaches to activity recognition: location-based (where you are), kinematics-based (how you move), object-use based (what you use). In the paper, he and his colleagues adopted the object-use based approach, and compared the accuracy of the Intel WISP (Wireless Identification & Sensing Platform) - which is powered by a 3-dimensional RFID tag that receives energy transmitted via an RFID interrogator rather than a battery (shown left) - and the iBracelet wrist-worn RFID reader for recognizing activities based on the use of 25 tagged objects throughout various rooms in an apartment. In a study involving 10 subjects performing 14 tasks, they found that the WISP achieved 90% precision and 91% recall, while the iBracelet achieved 95% precision and 60% recall.
Unfortunately, on Day 2, I arrived rather late to the first session, which was composed of a series of shorter presentations (15 minutes) on shorter papers or "notes" (4 pages). I did see - and have some notes on - the last two presentations, and among the many reasons I'm sad I missed so much of this session is that the session chair, John Krumm - one of my favorite speakers (and people) in the UbiComp community - introduced each paper with a joke that was supplied by the author(s). It was a great way to inject some levity - and increase attention - before the start of each talk. John later told me that he'd learned that a dose of humor primes the reception and recall of the next few minutes of a presentation (I've since found a few online resources devoted to humor and presentations, including a research study that suggests that "[t]he effect of the cues produced by humor is interpreted as creating a more distinctive and thus more accessible memory"). I'll experiment below with inserting the jokes before my notes on each of the two presentations I saw from that session.
[Introductory joke: A dog sees a public display advertising for a place that sends dual-tone multi-frequency telegrams. The dog goes in and asks the telegraph operator to send a telegram that says, "Woof woof woof." The telegraph operator says, "There are only three woofs here. You could send another one for the same price." The dog replies, "But that would make no sense at all."]
David Dearman presented "BlueTone: A Framework for Interacting with Public Displays Using Dual-Tone Multi-Frequency through Bluetooth", in which users can pair their Bluetooth-enabled mobile phones with an appropriately configured public display by renaming their phone, and then use their phone as an input device - entering text, manipulating the cursor and/or selecting menu items - without having to download or install any special software on the phone. The display must have a Bluetooth adapter, and be running an EventServer, BluetoothScanner, DisplayClient and one or more DTMFReader processes. I didn't find the example shown during the presentation - manipulating a YouTube video - terribly compelling, but I imagine BlueTone could be very useful for some special-purpose, large [proactive] display applications that I've been involved with, e.g., the Context, Content and Community Collage at Nokia (which we presented at CSCW 2008) and CoCollage at Strands (which we presented at C&T 2009), and it would be a nice augmentation to the ProD Framework for Proactive Displays by Congleton, et al. (presented at UIST 2008).
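For readers unfamiliar with the "dual-tone multi-frequency" part of the name: a standard DTMF keypad assigns each key a pair of simultaneous tones, one from a set of four low (row) frequencies and one from a set of four high (column) frequencies. The sketch below shows only the standard frequency assignments; how BlueTone actually transports these over Bluetooth is described in the paper.

```python
# Standard DTMF keypad: each key maps to one low (row) and one
# high (column) frequency, transmitted simultaneously.
ROWS = (697, 770, 852, 941)        # Hz (low group)
COLS = (1209, 1336, 1477, 1633)    # Hz (high group)
KEYS = ("123A", "456B", "789C", "*0#D")

def dtmf_tones(key):
    """Return the (low, high) frequency pair for a keypad key."""
    for r, row in enumerate(KEYS):
        c = row.find(key)
        if c != -1:
            return ROWS[r], COLS[c]
    raise ValueError(f"not a DTMF key: {key!r}")
```

So pressing "5" produces 770 Hz and 1336 Hz together, which is what makes the scheme robust enough to survive ordinary voice channels.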
[Introductory joke: An upset woman carried her baby out of the research lab and told the man standing there, "That blended public display just told me that my baby is ugly." The man says, "I think you should tell that blended public display that you’re offended, and if you like, I’ll hold your monkey for you."]
Joe Finney presented "Toward Emergent Technology for Blended Public Displays", in which he espoused a vision in which every pixel on a display is an intelligent, self-organizing device working with others to form a coherent image, enabling any collection of light sources to become a coherent display surface and - ultimately - to provide for pour on (or spray on) displays. The Firefly system is a step in this direction, consisting of a collection of individually addressable lighting elements (LEDs with microcontrollers) and a network of control elements for creating large scale displays. The system was used to create a 5m x 7.5m display of 3000 lights (consuming only 300w of power) during the Christmas 2007 season at Lancaster City Centre. Given the example Joe presented near the start of his talk - a large BBC screen in Birmingham, UK, that generated large-scale user (or viewer) acceptance issues - and other examples of objections to large public displays in Los Angeles and other U.S. cities (Joe mentioned that over 709,900 such displays had been deployed in the U.S. in 2008 alone), I hope that human-centered design practices will keep pace with technological advancements that make it easier to deploy large public displays.
[No more jokes :-( ... on to the next session.]
Tamara Denning presented "A Spotlight on Security and Privacy Risks with Future Household Robots: Attacks and Lessons" [slides (PDF)], which almost seemed like a work of science fiction, describing how household robots - such as Rovio ("a WiFi enabled mobile webcam") or Spykee ("the WiFi Spy Robot") - could be hacked via unprotected - or underprotected - wireless networks, and used for eavesdropping, minor vandalization, tripping up or simply confusing the hapless human residents ... or band together with other hacked household robots to create larger scale mischief and/or destruction ... creating a whole new dimension of cyber-bullying ... and/or an evil new twist to crowdsourcing. To illustrate the risks, she showed a video of a robot stealing keys that had fallen on the floor [update: video of the remote-controlled multi-robot key-stealing attack is now embedded below]. One of the issues she raised was that some of these robots are designed for children ... and one can imagine "Trojan robots" given as gifts.
Tim Kindberg presented "Authenticating Ubiquitous Services: A Study of Wireless Hotspot Access", highlighting the risks of phishing scams via WiFi hotspots, in which unsuspecting visitors to public and semi-public places might be lured into connecting to the Internet via a rogue wireless access point. Tim and his colleagues investigated three different "physical linkage" vehicles through which people could be notified of how to connect to a wireless access point in a cafe - a leaflet on a table in the cafe, a printed poster on a wall or a plasma display mounted on a wall - and three different "virtual linkage" mechanisms through which access to the network could be gained - password, interlock and synchronization. They found that the perceived strength of physical linkage (bolted to a wall vs. loose on a table) and virtual linkage (number of transactions or steps) were associated with a higher confidence in the security of the access point. They also found that usability was a significant factor among their participants (customers of the cafe). Based on my personal interactions with dozens of cafe owners and staff about the adoption and use of technology (for CoCollage), I suspect that adding any extra complexity to the wireless access process - which may increase questions, requests for assistance or other demands on the staff - would be resisted or rejected by most owners (even more than their customers).
Susan Wyche presented "Broadening UbiComp’s Vision: An Exploratory Study of Charismatic Pentecostals and Technology Use in Brazil" [and has since posted her slides (!)], in which she described some of the ways that Pentecostals perceive and use information and communication technologies (ICTs) as part of their religious beliefs and practices. Charismatic Pentecostals - who now make up 28% of the Latin American population - believe in biblical inerrancy and miracles, and report experiencing both the divine and the demonic through ICTs (an example of which is shown to the right ... and I'll embed a closed-captioned YouTube video by Pastor Josue Yrion on Satanic Disney [which I discovered after the talk], in which he describes an incredible array of purportedly hidden agendas in various Disney movies that violate one or more tenets of the pastor's belief system, below).
Susan argued that in order to be truly global, ubiquitous computing needs to take account of non-normative belief and value systems outside of the Global North. I agree that it is important to take account of such systems, and that we ought to think carefully about what kinds of practices we want to support. As I mentioned in my notes from CSCW 2006, I think that many of the examples that Susan had earlier presented in a paper on "Technology in Spiritual Formation: An Exploratory Study of Computer Mediated Religious Communications" offer some intriguing insights into design issues that include not only the religious / secular spectrum, but power paradigms such as "command and control" vs. "listen and participate". Toward the end of her talk (slide 17, to be precise), Susan posed a couple of provocative questions:
- What if individuals want to use ICTs to support activities that contradict some technology developer’s personal value systems?
- Whose user needs are marginalized at the expense of furthering a western normative agenda about appropriate ICT use?
I think it's important to be sensitive to other value systems, and to be aware of our own [often implicit] agendas, and I was fascinated to learn more about the alternate realities of Charismatic Pentecostalism ... but I found myself thinking about a dark side of non-normative western belief systems - a pervasive system of belief in Africa involving HIV/AIDS, the virgin cure and infant rape. I don't believe that Susan, her co-authors or any others in the ubiquitous computing community would propose supporting this belief system, but I do believe that, generally speaking, we ought to design ICTs that promote more rational practices ... or at least, given some of the more playful applications I saw at the conference and elsewhere, practices that are not considered harmful (within the context of our western / Global North value systems).
Nithya Sambasivan and Nimmi Rangaswamy co-presented "Ubicomp4D: Infrastructure and Interaction for International Development—the Case of Urban Indian Slums" [slides], which offered another opportunity to learn more about the practices - and predicaments - of large groups of people outside the Global North. They defined UbiComp4D as the application of ubiquitous computing to address poverty-related issues (riffing on ICT4D, Information and Communication Technologies 4 Development). After outlining some of the characteristics of "the slum ecologies" in Mumbai and Bangalore, they presented three vignettes highlighting the ways ICTs - mobile phones, televisions and DVD players - are used to support family ties, work and entertainment. They then recommended a number of design considerations: look for opportunities for inserting people into the loop(s), design for failures and other disruptions in the ecosystem, accommodate varying levels of literacy (e.g., support oral or auditory information exchange), and explore ways that ICTs can enhance and/or interlink existing technologies and be appropriated in new ways ... and places. Nokia phones played a prominent role in these ecologies, reminding me of some of the [other] ways that Nokia helps empower people through mobile technologies in developing regions that I'd discovered in preparing a presentation for a Pop!Tech 2007 session on "The Future of Mobility". Toward the end of the talk, the authors suggested that informal community gathering spots may offer opportunities for large public displays, reminding me of the Big Board public display application and associated SnapAndGrab interactions that Gary Marsden and his colleagues have worked on in South Africa.
David Nguyen presented "Encountering SenseCam: Personal Recording Technologies in Everyday Life" in which he and his colleagues conducted experiments to determine how people who encounter a SenseCam - a wearable device with a camera and sensors that can take periodic photos of the wearer's environment (including the people in that environment) - feel about the prospect of being passively photographed by the device. The 19 SenseCam wearers in 4 locations across 2 countries encountered 686 people, of whom 413 were willing to take a survey, and 15 of them were interviewed. Among the issues that arose were the quality and quantity of photos (lower = more acceptable), visual vs. audio recording (audio recording = bad), and different stages at which they may want to be asked for permission - e.g., before the SenseCam takes a photo and/or before the photo is shared. Participants seldom reported being willing or able to take action about the use of SenseCam (despite their level of discomfort), and there seemed to be interesting differences among different populations, i.e., people in the U.S. were generally more concerned about attractiveness and image management, while people in the UK were generally more aware of the issues (perhaps because of the greater prevalence of CCTV cameras in that country [which may soon be used in a "game" - Internet Eyes - in which viewers can monitor CCTV cameras and earn prizes by reporting crimes, possibly leading to greater success than the Texas Virtual Border Watch Program]).
What was particularly interesting about this work - for me (aside from the fact that several of the authors are close friends) - is that one of the intended uses of SenseCam (a relatively uncommon camera platform) is to assist people with physical or mental disabilities, and yet throughout the conference, I was encountering similar issues among able-bodied people in response to more common types of cameras (including cameraphones and video cameras), suggesting that these privacy concerns are far more prevalent than might be anticipated. I've posted several photos from the conference on Flickr, and I've labeled some of them with the names of the people who are in them. Some of the photos from Mario Romero's fabulous Flickr set for UbiComp 2009 that I've used here (with permission) have people's names embedded in the photos themselves, e.g., the ones of Sumi and Henry near the top of this post. I wonder how many of these people would object to these labeling practices ... um, or how many of them might object to my blogging about them.
Michael Weber moderated a panel on "Achievements, challenges, obstacles, and perspectives – where shall we be in another decade of ubicomp research". The slides from the four panelists have been posted to SlideShare [thanks!], so I will restrict my notes here to a single sentence about each of their opening statements. James Scott [slides] complained that much of the work at UbiComp is carried only far enough to enable writing a paper (or three) about it, and suggested ways we might encourage larger-scale, longer-term deployments. Tim Kindberg [slides] asked "what kind of tribe do we want to be?", positioned ubicomp research somewhere in the middle of the "magic" of custom experiences (represented by Mickey Mouse) and the sea of common APIs and platforms (represented by City Mouse) and suggested we collaborate with other tribes. Shwetak Patel [slides] proposed that we push beyond the lab, toward commercialization (engaging with other tribes, such as entrepreneurs, venture capitalists and other businessfolk), which may eventually loop back by providing "off-the-shelf" technologies for future ubicomp research[ers] to use. Jeff Hightower [slides] rhetorically asked "what are our widely adopted Ubicomp success stories?" and then provocatively answered "None!", but he did note that we are only 10 years "young", and despite his indictment, he believes there may be opportunities for future wide adoption success stories in persuasive technologies and life-assistive solutions. If I were to summarize the common themes in the panel, I would say that to achieve success in UbiComp, it takes a village of interdisciplinary people and tribes.
I missed the morning sessions of the next (and last) day - it was the only day I could visit a Disney World theme park for free, and I'll post a separate entry about my semi-structured field exploration of Epcot Center that day - but I did make it back in time for the closing keynote by Sandy Pentland, "Honest Signals from Reality Mining", based on a similarly titled book. Due to time constraints, Sandy condensed his talk down to 30 minutes, but I'll embed a 50-minute video of a similar talk he gave at Google below (there is also an 8-minute version). Sandy and his colleagues have mined the data from mobile phones, wearable sensors (sociometric badges) and other devices to track - or infer - certain individual, group and organizational behavior patterns. He talked about neurophysiological systems and what they indicate about our internal states and how they influence behavior in others (this is more succinctly captured in the slide I fuzzily captured in the photo shown on the left, a clearer version of which can be found around the 8:00 mark in the 50-minute video). He also referenced work on task roles (giver, orienteer, follower) and social roles (protagonist, supporter, neutral, attacker) by Bales, and claimed that computers are as good at identifying these roles as people are.
Some of the most interesting applications of reality mining are in the workplace, where signals can be used to infer such things as face-to-face proximity, identification as peers and group affirmation, which have been shown to affect group interaction quality, common task performance and homogeneity of opinion. The feedback provided from these signals in a meeting context (via a Meeting Mediator device) can help participants recognize when one or more of them are dominating a discussion ... which may influence subsequent behavior by the participants. Noting a study that showed that increasing face-to-face cohesiveness can lead to a 10% increase in productivity, Sandy suggested that capturing and sharing these signals can have significant impact within an organization.
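The kind of balance signal a Meeting Mediator-style display might surface is easy to sketch (this is my own toy version; the metric and threshold below are illustrative, not from Sandy's work): compute each participant's share of the total speaking time and flag anyone whose share crosses a threshold.

```python
def dominance(speaking_seconds, threshold=0.5):
    """Flag participants whose share of total speaking time exceeds
    `threshold`.

    A toy illustration of the sort of balance feedback a Meeting
    Mediator-style device might surface; the metric and threshold
    are my own illustrative choices, not from the research.
    """
    total = sum(speaking_seconds.values())
    if total == 0:
        return []
    return [person for person, secs in speaking_seconds.items()
            if secs / total > threshold]
```

A meeting in which one person has spoken for 300 of 400 total seconds would flag that person, and a real system would presumably recompute such signals continuously over a recent window rather than over the whole meeting.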
While I can imagine that measuring and showing visualizations of these signals can have positive impacts, I can also imagine unintended negative consequences. As a chronic loudmouth who frequently speaks up at meetings, I have sometimes spoken with people who have been quieter in meetings in which we've jointly participated, and some have told me that they prefer to have one or more people play a more vocal role: they tend to post-process the interactions and information shared during a meeting, and respond or [re]act more effectively afterward. The theory of multiple intelligences suggests [to me] that diversity in thinking and interaction styles can be a good thing for an organization, and the measurement and display of interaction patterns may produce a Hawthorne effect, encouraging more people to speak up - or pipe down - when that is not their natural style, which may ultimately yield suboptimal results.
Well, I've done my best to mine and synthesize some of the signals I detected at the conference. I want to finish off by thanking Sumi Helal for doing such a great job in organizing the conference, and thank all the other organizers, reviewers, authors, presenters and attendees for co-creating such an engaging experience!
[Update: The Miami Herald - the paper that Dave Barry calls "home" - published an article by technology reporter Bridget Carey about some of the demonstrations and posters, helping to fill a[nother] "gap" in my coverage of the conference: "Technological devices offer glimpse into future: Researchers from universities around the world gathered in Orlando last week to present technology that can better people's lives"; discovered via Chris Jablonski's ZDNet post, Ubicomp 2009 and the fusion of our digital and physical worlds]
[Update: videos from the UbiComp 2009 Video Program have been uploaded and are now available for viewing.]