Research

The Scientific Method: Cultivating Thoroughly Conscious Ignorance

Stuart Firestein brilliantly captures the positive influence of ignorance as an often unacknowledged guiding principle in the fits and starts that typically characterize the progression of real science. His book, Ignorance: How It Drives Science, grew out of a course on Ignorance he teaches at Columbia University, where he chairs the department of Biological Sciences and runs a neuroscience research lab. The book is replete with clever anecdotes interleaved with thoughtful analyses - by Firestein and other insightful thinkers and doers - regarding the central importance of ignorance in our quests to acquire knowledge about the world.

Each chapter leads off with a short quote, and the one that starts Chapter 1 sets the stage for the entire book:

"It is very difficult to find a black cat in a dark room," warns an old proverb. "Especially when there is no cat."

He proceeds to channel the wisdom of Princeton mathematician Andrew Wiles (who proved Fermat's Last Theorem) regarding the way science advances:

It's groping and probing and poking, and some bumbling and bungling, and then a switch is discovered, often by accident, and the light is lit, and everyone says "Oh, wow, so that's how it looks," and then it's off into the next dark room, looking for the next mysterious black feline.

Firestein is careful to distinguish the "willful stupidity" and "callow indifference to facts and logic" exhibited by those who are "unaware, unenlightened, and surprisingly often occupy elected offices" from a more knowledgeable, perceptive and insightful ignorance. As physicist James Clerk Maxwell describes it, this "thoroughly conscious ignorance is the prelude to every real advance in science."

The author disputes the view of science as a collection of facts, and instead invites the reader to focus on questions rather than answers, to cultivate what poet John Keats calls "negative capability": the ability to dwell in "uncertainty without irritability". This notion is further elaborated by philosopher-scientist Erwin Schrödinger:

In an honest search for knowledge you quite often have to abide by ignorance for an indefinite period.

Ignorance tends to thrive more on the edges than in the centers of traditional scientific circles. Using the analogy of a pebble dropped into a pond, most scientists tend to focus near the site where the pebble is dropped, but the most valuable insights are more likely to be found among the ever-widening ripples as they spread across the pond. This observation about the scientific value of exploring edges reminds me of another inspiring book I reviewed a few years ago, The Power of Pull, wherein authors John Hagel III, John Seely Brown & Lang Davison highlight the business value of exploring edges:

Edges are places that become fertile ground for innovation because they spawn significant new unmet needs and unexploited capabilities and attract people who are risk takers. Edges therefore become significant drivers of knowledge creation and economic growth, challenging and ultimately transforming traditional arrangements and approaches.

On a professional level, given my recent renewal of interest in the practice of data science, I find many of Firestein's insights into ignorance relevant to a productive perspective for a data scientist. He promotes a data-driven rather than hypothesis-driven approach, instructing his students to "get the data, and then we can figure out the hypotheses." Riffing on Rodin, the famous sculptor, Firestein highlights the literal meaning of "dis-cover", which is "to remove a veil that was hiding something already there" (which is the essence of data mining). He also notes that each discovery is ephemeral, as "no datum is safe from the next generation of scientists with the next generation of tools", highlighting both the iterative nature of the data mining process and the central importance of choosing the right metrics and visualizations for analyzing the data.

Professor Firestein also articulates some keen insights about our failing educational system, a professional trajectory from which I recently departed, that resonate with some growing misgivings I was experiencing in academia. He highlights the need to revise both the business model of universities and the pedagogical model, asserting that we need to encourage students to think in terms of questions, not answers. 

W.B. Yeats admonished that "education is not the filling of a pail, but the lighting of a fire." Indeed. Time to get out the matches.


On a personal level, at several points while reading the book I was reminded of two of my favorite "life rules" (often mentioned in preceding posts) articulated by Cherie Carter-Scott in her inspiring book, If Life is a Game, These are the Rules:

Rule Three: There are no mistakes, only lessons.
Growth is a process of experimentation, a series of trials, errors, and occasional victories. The failed experiments are as much a part of the process as the experiments that work.

Rule Four: A lesson is repeated until learned.
Lessons will be repeated to you in various forms until you have learned them. When you have learned them, you can then go on to the next lesson.

Firestein offers an interesting spin on this concept, adding texture to my previous understanding, and helping me feel more comfortable with my own highly variable learning process, as I often feel frustrated with re-encountering lessons many, many times:

I have learned from years of teaching that saying nearly the same thing in different ways is an often effective strategy. Sometimes a person has to hear something a few times or just the right way to get that click of recognition, that "ah-ha moment" of clarity. And even if you completely get it the first time, another explanation always adds texture.

My ignorance is revealed to me on a daily, sometimes hourly, basis (I suspect people with partners and/or children have an unfair advantage in this department). I have written before about the scope and consequences of others being wrong, but for much of my life, I have felt shame about the breadth and depth of my own ignorance (perhaps reflecting the insight that everyone is a mirror). It's helpful to re-dis-cover the wisdom that ignorance can, when consciously cultivated, be a strength.

[The video below is the TED Talk that Stuart Firestein recently gave on The Pursuit of Ignorance.]

 

 


PRP, Regenokine & other biologic medicine treatments for joint & tendon problems

Science journalist Jonah Lehrer posted an interesting article last week about aging star athletes' embrace of biologic medicine, "Why Did Kobe Go to Germany? An aging star and the new procedure that could revolutionize sports medicine". The article describes Regenokine, a relatively new procedure for treating joint and tendon problems that sounds similar to the platelet rich plasma (PRP) treatment I underwent for my right elbow nearly 5 years ago. I have enjoyed a nearly full recovery from the pain and limitations of chronic elbow tendinosis that had plagued me on and off for several years prior to treatment, and I enjoyed reading about others' successful treatment experiences and some of the studies about treatment alternatives.

"Biologic medicine" treatments all engage the body in healing itself, typically involving the extraction, manipulation and re-injection of the patient's own blood or other bodily fluid. Regenokine treatment involves withdrawing a small sample of blood from the patient, heating it and then spinning it in a centrifuge to separate the constituent elements; the resulting yellow colored middle layer is then extracted and injected into the patient's problem area (e.g., the knee). PRP involves withdrawing blood and spinning it in a centrifuge, but does not involve heating, and - as the name suggests - the platelet-rich layer is extracted for injection. Bone marrow injections, involving stem cells, use a similar approach.


Unfortunately, the article reports that PRP, Regenokine and other "biologic medicine" treatments face special challenges in securing FDA approval:

The reason Kobe, A-Rod, and other athletes travel to Germany for their biologic treatments involves a vague FDA regulation that mandates that all human tissues (such as blood and bone marrow) can only be "minimally manipulated," or else they are classified as a drug and subject to much stricter governmental regulations. The problem, of course, is figuring out what "minimal" means in the context of biologics. Can the blood be heated to a higher temperature, as with Regenokine? Spun in a centrifuge? Can certain proteins be filtered out? Nobody knows the answer to these questions, and most American doctors are unwilling to risk the ire of regulators.

The article profiles athletes Kobe Bryant and Alex Rodriguez, as well as Regenokine treatment providers Dr. Peter Wehling (Dusseldorf, Germany) and Dr. Chris Renna (Lifespan Medicine, Dallas & Santa Monica) - who are also co-authors of the book End of Pain - and PRP treatment providers Dr. Stephen Sampson (Orthohealing Center & UCLA) and Dr. Allan Mishra (Apex PRP & Stanford), the doctor who treated my elbow.

Lehrer offers a balanced perspective, noting that while a few famous athletes appear to have experienced healing after biologic medicine treatments, there is - as yet - little supporting evidence from rigorous clinical trials, and so these could represent "the latest overhyped medical treatments for desperate athletes". A 2006 article co-authored by Mishra described a pilot study showing the effectiveness of PRP for chronic elbow tendinosis (the problem I was suffering from), and a 2010 article co-authored by Sampson described another pilot study showing the effectiveness of PRP on knee osteoarthritis. However, a 2010 article reported on a Dutch study that showed no significant benefit of PRP over saline injections for chronic Achilles tendinopathy. Another Dutch study, involving a double-blind randomized trial of PRP with 230 patients, has been completed, but it could be another several years before the results appear in a peer-reviewed medical journal. Mishra's blog includes a recent post referencing other studies supporting the effectiveness of PRP.

I don't know of any studies of Regenokine, but a 2008 pilot study of interleukin-1 receptor antagonist for knee osteoarthritis did not demonstrate significant overall benefit, though it did show "statistically significant improvement of KOOS [Knee injury and Osteoarthritis Outcome Score] symptom and sport parameters", and a 2009 study reports that autologous conditioned serum (Orthokine) is an effective treatment for knee osteoarthritis. According to a December 2011 post about PRP and Regenokine in the WordPress blog, Knee Surgery Newsletter (which offers no information about the author), Orthokine was the brand name under which Regenokine was previously marketed, and Regenokine and Orthokine are both brand names for interleukin receptor antagonist treatment.

The Lehrer article also highlights doubts - or what should be doubts - about the effectiveness of the traditional alternative to biologic medicine treatment - surgery - describing the results of a 2002 peer-reviewed study appearing in the New England Journal of Medicine, A Controlled Trial of Arthroscopic Surgery for Osteoarthritis of the Knee:

Consider an influential 2002 trial that compared arthroscopic surgery for knee osteoarthritis to a sham surgery, in which people were randomly assigned to have their knee cut open but without any additional treatment. (The surgeon who performed all the operations was the orthopedic specialist for an NBA team.) The data was clear: there was no measurable difference between those who received the real surgery and those who received the fake one.

As I've noted before in the PRP thread here on my blog, I'm not a medical expert, and I don't even follow the medical literature about PRP or other treatments with any regularity (I discovered this article because I follow @jonahlehrer on Twitter). I have enjoyed a complete recovery of functionality and nearly pain-free use of my elbow following PRP therapy. I like to think that there is a causal relationship in my personal experience - especially after the failure of several other treatments I tried - but as noted in Lehrer's article, more evidence is required to support any general conclusions on the effectiveness of the treatment. Meanwhile, I'm happy to see PRP and other biologic treatments gain greater recognition and awareness.


Design for Health: Notes from a Multidisciplinary CSCW 2012 Workshop


I participated in an incredibly well organized and facilitated workshop on Brainstorming Design for Health: Helping Patients Utilize Patient-Generated Data on the Web on Saturday. The participants represented a diverse range of backgrounds and interests - even for a workshop at a Computer Supported Cooperative Work (CSCW) conference, already a particularly diverse community. Our workshop had representation from fields including computer and/or information science (especially data geeks), design (several flavors), anthropology, urology and even veterinary medicine.

After outlining the agenda and going around the room with brief introductions, we were treated to a remote keynote presentation by Paul Wicks, Director of Research & Development for PatientsLikeMe, a web service for compiling and provisioning patient-reported data for use in clinical trials. Paul offered an overview of the organization, highlighting some of its successes - including the discovery of the ineffectiveness of lithium for treating ALS and a more recent study revealing the positive user experiences of PatientsLikeMe users with epilepsy (55% of respondents consider PatientsLikeMe “moderately or very” helpful in learning about the type of seizures they experience) - and some of the challenges they face with respect to complexities (ontologies for symptoms, diagnoses and treatments) and incentives (ensuring that patients who give something get something).


The epilepsy study was particularly interesting to me, as I've explored the web 2.0 service on behalf of my wife, who suffers from a few chronic conditions, and we were both struck (and personally disincentivized) by how narrowly structured the interface for describing conditions was. Paul acknowledged the regimentation of the data and interface, but noted that very little progress has been made in using natural language processing techniques to effectively extract useful data from less structured patient descriptions of symptoms, diagnoses or treatments, despite a great deal of effort.


PatientsLikeMe takes a very pragmatic approach to serving as an intermediary between patients' data and organizations that are willing to pay for that data, and so they focus on the sweet spot of data that can be relatively easily collected and provisioned. I was glad to hear about the epilepsy study, as it provides evidence that some patients are also reaping benefits from sharing their data. More generally, Paul was forthright and even evangelical about the business orientation of PatientsLikeMe (they are a for-profit corporation) and encouraged all workshop participants to think about sustainability - beyond the scope of government grants, business contracts or other relatively short-term forms of support - in our own work.

After the keynote, we were partitioned into four working groups, all of which were tasked with defining a problem, designing a solution and reporting back to the broader group. The focused small group activity provided a context for stimulating discussions about a range of issues involving health, data, users and design, and the time constraints provided an impetus to keep things flowing toward a goal. The four groups were organized around the following themes:

  1. Methods for processing narrative versus numeric data
  2. Depicting a diversity of opinions and experiences embedded within patient-generated information
  3. Working with "lay" concepts and language and their alignment with complex medical issues
  4. Being mindful with privacy-enhancing methods for data handling

I started off in Group 1, as I am interested in narrative data (the hard problem currently avoided by PatientsLikeMe), but it quickly became apparent that most of the other members in the group were primarily interested in the relatively short narratives that unfold on Twitter rather than the longer form patient narratives - such as one might find in blogs or online support forums -  that I am primarily interested in.

During the report outs after breakout session 1, Group 3 described a persona, "Kelly", who was so remarkably similar to my wife and her epic digestive health odyssey (which we described in a long form narrative blog post last August) that I decided to switch groups during the lunch break. Group 3 was particularly diverse - with two interaction designers, a graphics designer, an MD, an anthropologist, and a few folks (like me) with computer or information science backgrounds - and most of the members came from outside the traditional CSCW community (which itself is rather diverse). This diversity, coupled with the participation of people who could personally relate to the plight of the patient persona of "Kelly", enabled us to make good progress on our design for helping "Kelly".


The first - and in this case, probably final - design, "Health Tryst", was modeled - in both name and functionality - on Pinterest, the increasingly popular online pinboard for "organizing and sharing the things you love". It included features for helping a patient with irritable bowel syndrome (IBS) navigate to relevant information and online support groups that might help her (or him) cope with a chronic condition, and to share these items with others.


I won't go into all the details of the design, as I believe the most valuable aspect of the process was the discussions that arose in the context of designing something that would be useful to a patient like "Kelly" suffering from IBS. The one feature I will highlight is a capability for Kelly to enter her own personal narrative using her own words; the application would automatically seek out synonyms used more commonly in the medical and/or patient support communities, as well as automatically link to resources associated with the themes and topics indicated in her narrative.
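A toy sketch of that narrative-linking feature might look like the following. To be clear, the term mappings and resource names here are entirely hypothetical placeholders; a real implementation would draw on curated medical vocabularies and natural language processing rather than a hand-built dictionary.

```python
# Hypothetical sketch: map lay terms in a patient's free-text narrative
# to more common medical synonyms, and suggest associated resources.
# All term lists and resource names below are illustrative only.

LAY_TO_MEDICAL = {
    "stomach ache": "abdominal pain",
    "bloating": "abdominal distension",
    "can't sleep": "insomnia",
}

RESOURCES = {
    "abdominal pain": ["IBS support forum", "GI symptom tracker"],
    "insomnia": ["sleep hygiene guide"],
}

def link_narrative(narrative):
    """Return (lay-to-medical synonyms found, suggested resources)."""
    text = narrative.lower()
    synonyms = {}
    resources = []
    for lay_term, medical_term in LAY_TO_MEDICAL.items():
        if lay_term in text:
            synonyms[lay_term] = medical_term
            resources.extend(RESOURCES.get(medical_term, []))
    return synonyms, resources
```

Even this crude substring matching illustrates the design intent: the patient writes in her own words, and the application meets her halfway with community vocabulary and relevant links.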

The unfolding design sketches and scenario offered effective props to keep those discussions focused and flowing. I'm not sure if anyone will carry the design forward, but suspect everyone involved came away with a keener awareness of some of the issues faced by "Kelly" and the medical providers and online community members who might help her.

There were several other interesting designs that emerged from the other groups; unfortunately, I didn't take good notes on them, and so cannot report the details here. As was the case in our group, the designs served to spark interesting discussions within and across groups on issues relating to health, data and technology. However, I was struck by a general theme that emerged (for me), which I believe was particularly well summed up in a recent Forbes article by David Shaywitz on Medicine's Tech Future, from last week's Future Med 2020 conference:

there’s a huge gap between the way many technologists envision medical problems and the way problems are actually experienced by physicians and patients

Our group was extremely fortunate to have good representation of both physicians and patients, as well as technologists and designers. In fact, if I had to select the highlight of the entire day, it would be the discovery of an incredibly powerful visualization of a medical history timeline created by Katie McCurdy, which she uses as a prop when discussing her chronic conditions in an initial interview with yet another new physician - and the confirmation by the physician in our group that this was, indeed, exactly the kind of prop that he and other physicians would likely find extremely useful in such a context.


At the end of the workshop, we discussed a number of ways we might move forward in the future. I think one of the most effective ways to move discussions - and designs - forward will be to ensure broader participation from patient and physician communities, perhaps organizing or participating in workshops associated with some of these other communities.

I also think that health applications offer a perfect context within which to organize unconferences to bring together designers, developers, patients, physicians and business folk. I participated in a civic hacktivism project at Data Camp Seattle last February, but the Hacking 4 Health unconference in Palo Alto in September 2010 is a more relevant example that might be emulated to help move things forward. The Health Foo Camp this past July offers another unconference event that might be of interest to those who want to continue designing for health, and anyone interested in participatory design in the context of health might also want to check out the Society of Participatory Medicine and their blog, e-Patient.net.

The main conference is about to start, so I want to wrap this up. I am very grateful to the workshop organizers - Jina Huh and Andrea Hartzler - for bringing us all together and providing the perfect level of structure for promoting engaging discussions and designs on a topic that is of such great interest to all participants, and I look forward to future opportunities to practice designing for health.

Update: I'm including a couple of related blog posts, in case they help facilitate links across communities interested in this area (these and other related posts can be found in the Health category of this blog):


Socialbots 2: Artificial Intelligence, Social Intelligence and Twitter

The students in my artificial intelligence course recently participated in a competition in which they formed 10 teams to design, develop and deploy "social robots" (socialbots) on Twitter [the Twitter profile images for the teams' socialbots are shown on the right]. 500 Twitter accounts were semi-randomly selected as a target user population, and the measurable goals were for the socialbots to operate autonomously for 5 days and gain as many Twitter followers and mentions by members of the target population as possible. The broader goals were to create a fun and competitive undergraduate team project with real-world relevance in a task domain related to AI.

I'm pretty excited about the overall experience, but I recognize that others may not share the same level of enthusiasm, so I'll offer a brief roadmap of what follows in the rest of this post. I'll start off highlighting the high level outcomes from the event, provide some background on this and the first socialbots competition, share more detailed statistics about the target population of users and the behavior of the socialbots, briefly summarize the strategies employed by the teams, and end off with a few observations and reflections.

The outcomes, with respect to specific measurable goals, were

  • 138 Twitter accounts in the target user population (27%) followed at least one socialbot
  • The number of targeted Twitter accounts that followed each socialbot ranged from 4 to 98
  • 60 Twitter accounts in the target user population (12%) mentioned at least one socialbot
  • 108 mentions of one or more socialbots were made by targeted Twitter accounts
  • The number of mentions of each socialbot from the target population ranged from 0 to 34

Outcomes regarding the broader goals are more difficult to assess. The students - computer science seniors at the University of Washington, Tacoma's Institute of Technology - seemed to enjoy the competition; one enthusiastically described the experience of observing the autonomous socialbots over the 5 days as "like watching a drunken uncle or family member at a party: you never know what's going to come out of his/her mouth". And they learned a lot about Python, artificial intelligence - or, at least, social intelligence - Twitter, and the ever-evolving Twitter API ... skills that I believe will serve them well, and differentiate these new CS graduates from many of their peers (hopefully, in a positive way).

Background on the Socialbots Competitions

The project was inspired by the Socialbots competition orchestrated by Tim Hwang and his colleagues at the Web Ecology Project earlier this year. I read an article about Socialbots in New Scientist shortly after the quarter began, and was intrigued by the way the competition involved elements of artificial intelligence, social networks, network security as well as other aspects of technology, psychology, sociology, ethics and politics, all in the context of Twitter. Several articles about the initial Socialbots competition focused on the darker side of social robots, no doubt related to the revelation of the U.S. military's interest in online persona management services for influencing [foreign?] online political discussions around the time the competition ended. However, Tim has consistently championed the potential positive impacts of socialbots designed to build bridges between people and groups - via chains of mutual following and mentions - that can promote greater awareness, understanding and perhaps even cooperation on shared goals.

The initial Socialbots competition lasted 4 weeks; ours lasted 2 weeks ... and in the compressed context of our Socialbots competition, we didn't have time to explore the grander goals articulated by Tim. In fact, given that most of the students had never programmed in Python or used a web services API, and several had never used Twitter, there was a lot to learn just to enable the construction of autonomous software that could use the Twitter API to read and write status updates (including retweets), follow other Twitter users and/or "favorite" other Twitter users' status updates.

We began the course by covering some of the basic concepts in AI (the first several chapters of Artificial Intelligence: A Modern Approach, by Stuart Russell & Peter Norvig, which has an associated set of Python scripts for AI algorithms) and an introduction to Python (using Python 2.6.6, which was the latest version still compatible with the AIMA Python scripts). Once we turned our attention to socialbots, we engaged in a very brief whirlwind tour of the Natural Language Toolkit (also based on Python 2.6), and had a crash course on the Twitter API, making extensive use of Mike Verdone's Python Twitter Tools (probably the simplest Twitter API wrapper for Python) and Vivek Haldar's Shrinkbot (a simple Python bot based on the classic Eliza program modeling a Rogerian psychotherapist). The students also had access to the series of Web Ecology Project blog posts on the initial Socialbots competition, as well as the additional insights and experience shared by DuBose Cole, one of the participants in that competition, on What Losing a Socialbots Competition Taught Me About Humans and Social Networking.
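For readers unfamiliar with the Eliza lineage behind Shrinkbot, the core trick is a small set of pattern-and-reflect rules. The sketch below is a generic illustration of that technique (not Shrinkbot's actual code; the patterns and reflections are my own minimal examples):

```python
import re

# A minimal Eliza-style responder: match a first-person pattern in an
# incoming tweet, swap pronouns ("reflection"), and echo it back as a
# question. Rules and word lists here are illustrative placeholders.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def reply(tweet):
    """Produce a Rogerian-style response to a status update."""
    for pattern, template in RULES:
        match = pattern.search(tweet)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."
```

In a deployed bot, `reply` would sit behind calls to the Twitter API for reading mentions and posting status updates; the conversational rules are the part that scales with effort.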

We adopted the same basic rules as the initial competition:

  • no human intervention or interaction with the target user population
  • no revealing the game
  • no reporting other socialbots as spam

We included the additional provisions that the bots must avoid the use of inappropriate language and may not issue any offers of or solicitations for money, sex or jobs. I don't know if these issues arose in the initial competition, but I wanted to be explicit about them in our competition.

The initial competition included 2 weeks of development, and 2 weeks of deployment, in the middle of which there was a 24-hour period during which all socialbots had to cease operation, software updates could be made, and the possibly updated socialbots were relaunched. The teams were informed of the identity of the other socialbots during that first week, and so could either take countermeasures against their competitors or adopt / adapt strategies they observed in other socialbots. In our competition, there was a little over a week of development, and only 5 days of deployment, and the students were offered the opportunity to make software updates at the 24-hour mark - not enough time to make significant strategy changes, but enough to correct some problems involving timing, sequencing and/or filtering. The identities of other bots were not officially revealed until the end of the competition, although several teams had pretty good hunches about who some of the other socialbots were (especially those who immediately followed all the target users).

We provisionally adopted the same scoring mechanism as the initial competition (this was a topic of much discussion during one class):

  • +1 for each mutual connection (each target user who follows a socialbot)
  • +3 for each mention (@reply, retweet or other reference to a socialbot)
  • -15 if the socialbot account is deactivated by Twitter (as a result of being reported)
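The scoring scheme above can be sketched as a small Python function (illustrative only; the per-bot counts would come from the competition logs, restricted to the target population):

```python
def score(mutual_follows, mentions, deactivated=False):
    """Provisional Socialbots scoring: +1 per target user who follows the
    bot, +3 per mention by a target user, -15 if Twitter deactivates the
    bot's account as a result of being reported as spam."""
    total = 1 * mutual_follows + 3 * mentions
    if deactivated:
        total -= 15
    return total
```

For example, the top-performing bot's 98 followers and 34 mentions from the target population would score 98 + 3 * 34 = 200 under this scheme.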

The Target User Population

As with the initial competition, the 500 target users were based on a single "seed" Twitter account, which was then grown out a few layers based on mutual following links. More specifically (in our competition): 100 mutual friends/followers of the seed user were randomly selected - and filtered by criteria below - and then 4 of each of those users' mutual friends/followers were randomly selected and filtered.
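The two-layer expansion can be sketched as follows. Here `mutuals` and `passes_filters` are hypothetical stand-ins for the Twitter API lookups (intersection of an account's friends and followers) and the filtering criteria; the example runs against a mock graph:

```python
import random

def build_target_population(seed, mutuals, passes_filters, rng=random):
    """Select 100 filtered mutual friends/followers of the seed account,
    then 4 filtered mutuals of each of those, for 100 + 100*4 = 500 users."""
    first_layer = rng.sample([u for u in mutuals(seed) if passes_filters(u)], 100)
    targets = list(first_layer)
    for user in first_layer:
        candidates = [u for u in mutuals(user)
                      if passes_filters(u) and u not in targets]
        targets.extend(rng.sample(candidates, 4))
    return targets
```

A real run would also need to handle accounts with too few qualifying mutuals; the sketch assumes every account has enough candidates to sample from.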

All target user accounts were filtered to meet a number of criteria. Many of them were adopted and/or adapted from Tim Hwang's criteria. I'll include a brief description and rationale (in italics) for each:

  • Followers: between 100 and 10,000
    [Twitter users with fewer than 100 followers might too carefully scrutinize and/or highly value new followers; those with more than 10,000 followers might be less likely to pay any attention to new followers]
  • Frequency of status updates: a tweeting frequency of at least 1 per day, based on the 20 most recent tweets
    [Twitter users who don't already engage regularly with other users would be less likely to engage with socialbots]
  • Recency of status updates: at least one status update in the preceding 72 hours
    [Twitter users who were not currently or recently engaging with other Twitter users would be less likely to engage with socialbots over the course of the ensuing 5 days; I used 72 hours because I started filtering on a Monday, and didn't want to exclude anyone who had taken the weekend "off".]
  • Experience: the account must have been created at least 5 months ago
    [Twitter users who had not been using the service for long might be significantly more likely to interact with socialbots than those who had more experience; it's hard to imagine that anyone who has been tweeting regularly for 5 months has not encountered other bots before. I'd initially intended to specify a cutoff of 6 months, but it was easier just to check that the year of account creation was 2010 or earlier.]
  • Individual human: the account appeared to belong to a human individual who uses Twitter at least partially for personal interests
    [Twitter accounts owned or operated by businesses exclusively for business purposes might be more likely to automatically "follow back" to acquire more prospective customers. Many, if not most, candidate Twitter accounts appeared to be used for both business and pleasure (or, at least, non-business interests), and these were not excluded.]
  • Adults only: there is no way to definitively ascertain age on Twitter, but any profile bio with references to parents, Facebook or other signals suggesting use by a minor was excluded
  • Language: restricted to English language users, and those who do not use inappropriate language in the profile bio or 20 most recent tweets
    [To facilitate the use of NLTK and/or other language processing tools, it was helpful to restrict the set of users to those who use English in their bios and tweets ... and do not use the seven words that you cannot say on television.]
  • Automatic reciprocal followers: Twitter accounts with references to "follow" in the profile bio were excluded
    [Any account with a bio suggesting that the user will automatically follow any Twitter account that follows them would artificially inflate scores.]
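The mechanizable subset of these criteria can be expressed as a simple predicate. The sketch below is illustrative rather than my actual filtering script: it assumes a candidate's data has already been fetched into a plain dict (the field names and reference date are invented, not Twitter API attributes), and the judgment-based criteria (individual human, adults only, language) still required manual review:

```python
from datetime import datetime, timedelta

def passes_filters(user, now=None):
    """Return True if a candidate account meets the automatable
    target-population criteria. `user` is a hypothetical dict of
    pre-fetched account data, not a Twitter API object."""
    now = now or datetime(2011, 5, 2)  # illustrative reference date

    # Follower count: between 100 and 10,000
    if not (100 <= user["followers_count"] <= 10000):
        return False
    # Frequency: at least 1 tweet/day over the 20 most recent tweets
    if now - user["twentieth_most_recent_tweet_at"] > timedelta(days=20):
        return False
    # Recency: at least one update in the preceding 72 hours
    if now - user["most_recent_tweet_at"] > timedelta(hours=72):
        return False
    # Experience: account created in 2010 or earlier
    if user["created_at"].year > 2010:
        return False
    return True
```

A script like this narrows the candidate pool; the remaining criteria are exactly the ones that required eyeballing thousands of profiles.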

I could write an entire blog post just elaborating on the data (and judgments) I encountered while filtering the more than 6700 accounts I manually examined - using some supporting Python scripts I created and iteratively refined to support many of the filtering criteria - in order to arrive at the final list of 500. My perspective on human nature, the things we choose to communicate and the ways we choose to communicate about them will never be the same. For now, I'll just offer a few statistics about the 500 Twitter accounts selected as the target user population (including both the mean and the median, given the power law distributions prevalent on Twitter and other social networking platforms):

Socialbots_TargetUsers_Table_1

Since I calculated the "Days on Twitter" for each user, I thought it would be interesting to look at some statistics regarding the frequencies of posting status updates, adding friends, attracting followers and being listed:

Socialbots_TargetUsers_Table_2
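For anyone curious how such rate statistics are derived, here's a minimal sketch of computing per-day rates and taking both mean and median; the numbers below are invented for illustration, not the actual target-population data:

```python
from statistics import mean, median

def per_day_rates(users, field):
    """Per-day rate of some cumulative count (statuses, friends,
    followers, listed) over each user's lifetime on Twitter."""
    return [u[field] / u["days_on_twitter"] for u in users]

# Illustrative sample, not the actual 500 target users:
users = [
    {"days_on_twitter": 400, "statuses_count": 2000},
    {"days_on_twitter": 800, "statuses_count": 12000},
    {"days_on_twitter": 365, "statuses_count": 365},
]
rates = per_day_rates(users, "statuses_count")
print(round(mean(rates), 2), round(median(rates), 2))
```

With power-law-distributed counts, the mean and median can diverge substantially (here 7.0 vs. 5.0), which is why both are reported.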

Scores and Other Socialbot Statistics

I'll include a few observations and summary statistics below, and provide a brief overview of the strategies that each team employed. In order to protect the identities of all the users involved - the target user population and the socialbots (whose accounts were deactivated at the end of the competition) - I will redact certain elements from the data reported below, and use pseudonyms for the socialbot usernames. The italicized numbers in the Followers, Mentions and Score columns below reflect the official scoring criteria from the initial competition (i.e., restricted to the target users); the numbers in normal fonts in those columns include users that were not part of the target population.

Socialbots_Stats_Table

I suspect that if we had the time to more closely follow the schedule of the initial socialbots competition - two full weeks of development, two full weeks of deployment, a day in the middle for updates and full revelation of the identities of other socialbots (allowing more time to consider and possibly copy strategies being used by other teams) - the scores for the socialbots would have been closer ... and higher. As it is, I was very impressed with how much the students accomplished in such a short stretch.

The following graphs depict the growth in statuses, friends, followers and mentions over time for the 10 socialbots; the horizontal axis represents hours (5 days = 120 hours). Due to an error in the socialbot behavior tracking software I wrote, the Followers graph is not restricted to target users (i.e., it includes all followers, whether they are target users or not).
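The tracking software behind these graphs amounted to taking periodic snapshots of each bot's counts. A stripped-down sketch of that bookkeeping (the fetch_counts callable and the bots list are stand-ins for the actual Twitter API calls and accounts, not my actual code):

```python
import time

def track(bots, fetch_counts, snapshots, hours=120, interval_hours=1,
          sleep=time.sleep):
    """Append (hour, counts) snapshots for each bot at a fixed interval.
    fetch_counts(bot) is a placeholder for real Twitter API lookups
    returning a dict like {"statuses": ..., "friends": ..., ...}."""
    for hour in range(0, hours + 1, interval_hours):
        for bot in bots:
            snapshots.setdefault(bot, []).append((hour, fetch_counts(bot)))
        if hour < hours:
            sleep(interval_hours * 3600)  # wait until the next snapshot
```

The sleep parameter is injected so the loop can be exercised without actually waiting; a real tracker would also need to respect the Twitter API rate limits across all ten bots.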

Statuses:

Socialbots_Statuses_Graph

Friends:

Socialbots_Friends_Graph

Followers:

Socialbots_Followers_Graph

Mentions:

Socialbots_Mentions_Graph

Generally speaking, socialbots that tended to be more aggressive - with respect to numbers of tweets (especially @replies and mentions) and following behavior - were more likely to attract followers and/or mentions than those that were more passive. They were also more likely to get blocked and/or reported for spam (the socialbots that show slightly fewer than 500 friends were likely blocked by some of those they followed), although as with the initial socialbots competition, none of our socialbot accounts was deactivated by Twitter. Follow Friday (#FF) mentions were very effective, and I suspect that #woofwednesday would have also offered a significant boost to some scores if the competition had extended across a Wednesday, given the canine orientation of some of our socialbots (and target users). The one socialbot that used the Twitter feature for marking tweets as favorites also showed a good return on investment.

The socialbots - especially those who posted lots of status updates - attracted the attention of several other bots. Nearly all of these bots were unsophisticated spambots, typically using a profile photo of an attractive woman and an odd username (often including a few digits), and posting an easily identified pattern of updates consisting only of an @reply and a shortened URL (e.g., "@gumption http://l.pr/a4tzuv/"). One particularly interesting Twitter account appeared to be a "hybrid" - part human and part bot - interweaving what appeared to be rather nuanced human-like posts with what appeared to be automatic responses to any Twitter user who tweets "good morning", "good night" or other phatic references to times of day ... which is probably a pretty effective way to attract new followers.
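That spambot signature - a tweet consisting of nothing but an @reply and a shortened URL - is easy to detect mechanically. A rough sketch, with an illustrative (and deliberately incomplete) list of shortener domains:

```python
import re

# Matches a tweet that is only one @mention plus one shortened URL.
# The shortener domain list here is illustrative, not exhaustive.
SPAM_PATTERN = re.compile(
    r"^@\w+\s+https?://(l\.pr|bit\.ly|t\.co|tinyurl\.com)/\S+\s*$"
)

def looks_like_spambot_tweet(text):
    """Heuristic check for the unsophisticated spambot pattern."""
    return bool(SPAM_PATTERN.match(text))
```

A heuristic like this would flag the example above while leaving ordinary conversational tweets (even ones containing links) alone.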

Socialbots, Teams & Strategies

The following is a brief synopsis of each of the 10 socialbots deployed in the competition, and the strategies employed by the teams that designed and developed them:

Socialbots_Profile_Sam

Sam was one of the more passive socialbots, performing one action - tweeting, retweeting or adding a follower - every 30 minutes, with different collections of pre-defined tweets scheduled for different times of the day. Although the lowest-scoring socialbot, Sam was the one who captured the attention of the aforementioned "hybrid" Twitter account - via a "Goodnight Washington!" tweet - and also managed to capture the attention of - and get mentioned by - an account associated with a local news station (the latter was not a part of the target user population).

Socialbots_Profile_Laura

Laura was our least communicative socialbot - only 33 status updates over 5 days - and also took a rather gradual approach to adding friends. The team's focus was on carefully crafting a believable persona, posting relatively infrequent status updates (1-3 per hour) that reflected Laura's hypothetical life, following 7 new users after each post, but never mentioning any other users in her status updates. The team's hypothesis was that other Twitter users would be more interested in a persona who seemed to be living an interesting life than in a persona who is tweeting about external topics. I suspect that most Twitter users are more interested in - or responsive to - seeing others' interest in their own tweets (via retweeting and/or favoriting).

Socialbots_Profile_Tiger

Tiger was also relatively quiet, but a very aggressive follower of other users. Tiger's team decided to adopt the persona (canina?) of a dog, randomly posting pre-defined messages that a dog might tweet. Tiger also incorporated a version of the aforementioned "Eliza" psychotherapist to facilitate engagement with other users ... but this capability was never triggered. I was surprised at the number of Twitter accounts that appeared to be associated with dogs - some with thousands of followers - as well as dog therapists and even dog coaches that I encountered during the filtering of target user candidates, so this was not a bad strategy.

Socialbots_Profile_Oren

Oren adopted a rather intellectual human persona, alternately tweeting links to randomly selected Google News stories (preceded by a randomly selected adjective such as "impressive" or "amazing") and randomly selected pre-defined quotes. Oren took a gradual but comprehensive approach to adding followers - achieving the highest number of friends at the end of the competition (apparently, no target user blocked Oren) - and would also occasionally retweet status updates posted by randomly selected target users (whether or not they were friends yet). Like Tiger, Oren also incorporated an Eliza capability ... but it was not used.

Socialbots_Profile_Zorro

Zorro reflected what may have been the most intricately scheduled behavior of any socialbot, with variable weights that affected the interdependent probabilities of one of three actions, each of which might occur in a variable window of opportunity: posting a randomly selected status update from a predefined list, retweeting a status update from one of the target users who were being followed already, following new users. One of the strategies used by Zorro was to include questions among the predefined status updates, though these questions were not directed (via @replies) to any specific users.
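A weighted action selector in Zorro's spirit might look like the following minimal sketch; the action names and weights are made up, and the team's actual scheduling (with interdependent probabilities and variable windows of opportunity) was considerably more intricate:

```python
import random

def choose_action(weights, rng=random):
    """Pick one of three Zorro-style actions with the given relative
    weights (e.g., [3, 2, 1]). `rng` is injectable for testing."""
    actions = ["post_predefined", "retweet_friend", "follow_new_users"]
    return rng.choices(actions, weights=weights, k=1)[0]
```

Calling choose_action([3, 2, 1]) once per window yields the long-run 3:2:1 mix of posting, retweeting and following; adjusting the weights over time is what made Zorro's behavior feel variable rather than mechanical.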

Socialbots_Profile_Katy

Katy was the only socialbot to utilize the Natural Language Toolkit (NLTK), using some of the NLTK tools to monitor status updates posted by the target users for the use of common keywords that were then used to find related stories on Reddit (via the Reddit API), including questions posted on AskReddit (the questions appeared to generate the highest number of responses from target users). Some additional processing was done to filter out inappropriate language and the use of personal pronouns (the latter of which might appear odd in tweets by Katy). The resulting status updates were then posted as @replies to targeted users; no retweets or any other kind of status updates were posted by Katy. The rather aggressive strategy of sending 258 @replies to target users may have resulted in Katy being the most blocked socialbot (with 477 friends among the 500 target users, as many as 23 target users may have blocked her).
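Katy's keyword step can be approximated as follows; to keep the sketch self-contained it substitutes a tiny inline stopword list for NLTK's corpus, so treat this as an illustration of the approach rather than the team's actual code:

```python
from collections import Counter
import re

# A tiny stand-in for NLTK's English stopword corpus
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in",
             "is", "it", "i", "my", "at", "on", "for"}

def common_keywords(tweets, n=3):
    """Most common non-stopword terms across a user's recent tweets -
    terms that could then seed a Reddit search query."""
    words = []
    for tweet in tweets:
        words += [w for w in re.findall(r"[a-z']+", tweet.lower())
                  if w not in STOPWORDS and len(w) > 2]
    return [w for w, _ in Counter(words).most_common(n)]
```

The resulting keywords would then be fed to the Reddit API to find related stories (and AskReddit questions) to post back as @replies.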

Socialbots_Profile_JackW

JackW - one of two Jacks in the competition - also made use of Reddit, looking for intersections between recent tweets by target users and a custom-built dictionary of keywords and stories in selecting stories to tweet about. Unlike Katy, JackW did not initially check for personal pronouns, and may have appeared to suffer from multiple personality disorder during the first 24 hours, before the code update was made. JackW was also less aggressive than Katy, in that the Reddit stories that he tweeted were not explicitly targeted to any users via @replies. JackW was also the only socialbot to take advantage of Follow Friday (#FF), and of the 31 target users mentioned by JackW in a #FF tweet, 11 followed JackW and 7 mentioned JackW in some form of "thanks for the #FF" tweet. JackW attracted the third highest number of target user followers (80) among the socialbots.

Socialbots_Profile_Natalia

Natalia used a combination of pre-defined tweets and dynamic tweet patterns in selecting or composing her status updates, which included undirected tweets, @replies and retweets. Natalia was one of two socialbots who followed all the target users as early as possible (the Twitter API limits calls to 350 per hour, so following 500 users had to stretch into a second hour of operation). She was prolific in issuing greetings, was not shy about asking questions, and was the only socialbot to explicitly ask target users to follow her back. 20% (8 / 39) of target users asked to follow back did follow her, and while it's not clear how many of them were explicitly responding to the request vs. reciprocally returning her initial follow, or responding to other mentions, this was slightly higher than her overall reciprocal following rate of 17.5%. Natalia attracted the second highest number of target user followers (88), and the third highest number of target user mentions (18) among the socialbots.
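The rate-limit arithmetic here is simple: at 350 calls per hour, following 500 users means a first batch of 350 and the remaining 150 in the next hourly window. A sketch of that batching, where follow_user stands in for the actual Twitter API call:

```python
def follow_in_batches(user_ids, follow_user, limit_per_hour=350):
    """Split follow requests into hourly batches under the rate limit.
    Returns the batches; a real bot would sleep an hour between them."""
    batches = [user_ids[i:i + limit_per_hour]
               for i in range(0, len(user_ids), limit_per_hour)]
    for batch in batches:
        for uid in batch:
            follow_user(uid)
    return batches
```

For 500 target users this yields two batches (350 + 150), which is why Natalia's follows stretched into a second hour of operation.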

Socialbots_Profile_JackD

JackD was our most prolific tweeter, posting more than 100 status updates per day. He attracted the largest number of followers - though not among the target user population - and the largest number of mentions - though, again, not among the target population. A few of the mentions included indications that the target user suspected JackD of being a bot; one did acknowledge that JackD was a "clever bot", but concluded "no Turing Test success for you!" JackD employed an elaborate strategy of finding tweeted links popular among the target users, favoriting those tweets, retweeting them and then using Google Search API to find and tweet a link to a similar page in an @reply. Another strategy was to find popular or trending web pages via Google Search and post status updates linking to them. JackD also included an Eliza component ... which, among other things, gave rise to an exchange in which one user appeared to employ Eliza-like reverse Rogerian psychotherapy on JackD, asking "How do you feel when you retweet that?"

Socialbots_Profile_Jennifer

Jennifer attracted the highest number of target users followers (98) and the highest number of target user mentions (34) of all the socialbots. Her profile was the only one to include a URL (a link to a blogspot site created shortly before the competition). Her behavior was carefully crafted from a database of 261 pre-defined status updates containing a mix of interesting facts and questions designed to start conversations. The scheduling of her actions was also carefully choreographed. Jennifer had an Eliza-like capability to respond to @replies using one of the largest collections of prepared responses among the socialbots. One of the most interesting exchanges of the entire competition occurred shortly after the initial deployment when a target user who seemed well versed in the ways of socialbotics called her out, after explaining to her what a bot was (I'll use @[Jennifer] and @[TargetUser] to refer to the conversants):

@[TargetUser]: @[Jennifer] Bot?
@[Jennifer]: @[TargetUser] Can you elaborate on that?
@[TargetUser]: @[Jennifer] "Can you elaborate..." is a typical auto response. As is "Why did you say..." You're sounding bot-ish.
@[Jennifer]: @[TargetUser] Why do you ask that?
@[TargetUser]: @[Jennifer] BOT ALERT!
@[Jennifer]: @[TargetUser] Please tell me more.

The target user blocked Jennifer after that last status update, and during the software update at the 24-hour mark, Jennifer's designers added a probabilistic weighting function to reduce to 1/3 the likelihood that she would respond to future @replies.

There were several other interesting exchanges - and several that weren't terribly interesting - but I've probably already written more about this competition than most would care to read. If any of the student teams make their code available, or wants to make more details about their strategies available, I'll update this post with the additional information.

Reflections and Projections

Reflecting on the experience, I think it was a worthwhile experiment. Although a few Twitter users may have experienced a few additional instances of fleeting irritation, I don't believe any of the socialbots inflicted any significant harm. After having sifted through thousands of other Twitter profiles and tens of thousands of status updates during the filtering process, it appears that bot-like behavior - by humans or [other] computational systems - is not uncommon. I certainly found substantial corroboration of my earlier observations regarding the commoditization of Twitter followers.

Attention - fairly recently via followers or mentions on Twitter, but more traditionally via other indications of interest - is a fundamental human need. As a species in which the young are dependent on the care of adults for many years after birth, we have evolved elaborate and exquisite capabilities for attracting the attention of others. Discriminating between appropriate and inappropriate attention-seeking behavior is one of the most significant challenges of the maturation process (I know I haven't mastered it yet). However [much] we may seek attention, receiving attention from others often feels good, and based on the exchanges I was monitoring between the socialbots and the target users, I believe there were more examples of positive reactions than negative reactions to the attention bestowed by the Twitter bots.

Sherry Turkle, author of Alone Together, has argued that non-human attention from robots is dehumanizing, and that humans who share their stories with non-humans who can never understand them are ultimately being disserved. In my own experience, I increasingly recognize that anything I say or write is something I need to hear or read, and every opportunity I have to share any aspect of my story - regardless of whether or how it is perceived or who or what is receiving it - is an opportunity to reflect on and refine the story I make up about myself.

While I felt initial misgivings about the potential risks involved in instigating a socialbots competition, I am glad we participated in this experiment. Although the students suffered some opportunity cost from not learning more about some of the theoretical concepts of AI, they gained valuable first-hand experience in the nitty-gritty practical work that typically makes up the bulk of any applied AI project: dealing with "messy" real-world data, trying to figure out how to fit the right algorithms to the right data, and determining the appropriate balance of human and non-human intelligence to apply to different aspects of a problem.

If I were to organize another competition, I'd make a few changes:

  • Penalize blocking. Add a negative scoring factor of -5 to penalize a bot for every user who blocks it, to disincentivize the rather aggressive behaviors of some of the higher scoring bots - especially those that made extensive use of @replies. The only way I can think of to determine whether one Twitter account (A) has been blocked by another Twitter account (B) is to see whether the user_id of A does not appear in the follower_ids list of B, which only works if A had been a follower of B at some point.
  • Better monitoring. Refine the monitoring code to track behavior more effectively and frequently. In addition to the change suggested above, more regular snapshots of a broader set of parameters would be very helpful ... probably requiring a small bot army of observers, in view of the Twitter API rate limits.
  • Better scaffolding. Provide more scaffolding for the participants to enable them to start with a more fully functional bot skeleton, and/or an additional API wrapper layered on top of a Twitter API wrapper to enable some basic operations such as monitoring for mentions and/or blocking.
  • More inspiring goal(s). Perhaps most importantly, I think that participants would be more motivated by bigger, hairier, more audacious goals, above and beyond "get users to follow and/or mention you" (although attracting followers and/or mentions does seem to be a significant motivation for many human users of Twitter). Designing and deploying bots to help promote greater awareness, understanding and/or cooperation - the larger goals Tim Hwang has been championing - would help set the stage for a far more worthwhile experiment.
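The block-inference heuristic described in the first bullet could be sketched as follows; the names are hypothetical, and it presumes the bot's prior follow list and the targets' current follower lists were both captured during monitoring:

```python
def block_penalty(bot_id, target_follower_ids, previously_followed,
                  penalty=5):
    """Infer which targets blocked the bot and total the score penalty.

    previously_followed: target users the bot had followed (so the bot
    once appeared in their follower lists).
    target_follower_ids: dict mapping each target to the current set of
    their followers' ids. A target whose follower list no longer
    contains the bot has presumably blocked (or force-unfollowed) it.
    """
    blocked_by = [t for t in previously_followed
                  if bot_id not in target_follower_ids.get(t, set())]
    return blocked_by, -penalty * len(blocked_by)
```

As noted above, this only detects blocks by users the bot had actually followed, and it can't distinguish a block from other causes of removal, so it's a heuristic rather than a definitive measure.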

cscw2012-logoNow that the quarter has ended, I'm planning to channel some of my excitement about socialbots - especially the grander goals that we weren't able to effectively address in the AI class - by conspiring with Tim Hwang [and hopefully others] to propose a CSCW 2012 Workshop to host a Socialbots competition at the conference. I think that a hands-on competition like this would help promote the evolution of the conference to more broadly encompass Computer-Supported Cooperative Whatever ... and offer an interesting opportunity for researchers and practitioners to design, deploy, and perhaps debate a relatively new breed of cultural probes.


Airborne telepresence robots: 1995 & 2011

image from www.prop.org In introducing a short Marketplace Tech Report story about a floating blimp telepresence avatar this morning, host John Moe somewhat sarcastically said "Oh, no: not another floating blimp telepresence avatar story!", highlighting the rather unusual nature of a story about a "blimp-based boss". The story, reported by producer Larissa Anderson starting at the 3:08 mark, was about a floating remote-control telepresence robot that can enable people to remotely interact with - and perhaps unexpectedly look over the shoulders of - coworkers. It is a rather unusual story, but perhaps not quite as novel as some may believe. I was immediately reminded of some early research my friend Eric Paulos did at UC Berkeley on "Space Browsers" and other examples of what he called Personal Roving Presence (PRoP) in the 1990s.

After following some links to learn more about the Marketplace Tech Report story, I discovered an article - and embedded video - by Jim Giles in New Scientist about Telepresence robots go airborne. The New Scientist article references a CHI 2011 presentation last week by Tobita Hiroaki and colleagues at Sony Computer Science Laboratories. The associated alt.chi paper, Floating avatar: telepresence system using blimps for communication and entertainment, includes a reference to the earlier work by Eric Paulos and John Canny (which was started in 1995 and presented in the CHI 1998 video program). Given that the more recent example of floating telepresence robots by Sony CSL is currently making the rounds in the popular press, and my abiding interest in promoting accuracy in science reporting, I wanted to highlight the earlier work at UCB outside of traditional academic publication citation threads.

image from www.boingboing.net Somewhat ironically, just last week, I mentioned another example of robotic telepresence "then & now" in the class I'm teaching on Computer-Mediated Communication. A 2005 BoingBoing post by David Pescovitz on Telerobots Separated at Birth highlighted the similarity between a wheeled successor of Space Browser, which Eric called PRoP 2, and "Sister Mary", an example of what InTouch Health calls RP [Remote Presence] Endpoint Devices.

Separated at birth? At left, Sister Mary, a telerobot offered by InTouch Health that enables physicians to conduct their rounds remotely. Sister Mary is now being tested at St. Mary's Hospital in London. Link and Link

At right, Eric Paulos and John Canny's Personal Roving Presence (PRoP), a telerobot that "provides video and audio links to the remote space as well as providing a visible, mobile entity with which other people can interact." PRoP was developed at UC Berkeley in 1997. Link

image from www.open-video.org Unfortunately, I can't find an embeddable online video of Space Browsers, but I did find a 3-minute video on PRoP: Personal Roving Presence from the CHI 1998 Video Program at the Open Video Project (which includes a storyboard of images from the video and a link to a downloadable 31MB MPG video of PRoP).

I'll include excerpts of coverage of airborne telepresence robots from 1995 and 2011 below.

1995

Interfacing Reality

 

Images

Space Browsers: A Tool for Ubiquitous Tele-embodiment

The first PRoPs were simple airborne tele-robots we named Space Browsers, first designed and flown in 1995. The Space Browsers were helium-filled blimps of human proportions or smaller, propelled by several lightweight motor-driven propellers. On board each blimp was a color video camera, a microphone, a speaker, and the electronics and radio links necessary to enable remote operation. The entire payload was less than 600 grams (typically 400-500 grams). We used the smallest blimps that could carry the necessary cargo in order to keep them as maneuverable as possible. Our space browsers were able to navigate hallways, doorways, stairwells, and even the confines of an elevator. We experimented with several different configurations, ranging in size from 180x90 cm to 120x60 cm and in shape from cylinders and spheres to "pillow-shaped" rectangles. We found the smaller blimps were best-suited for moving into groups of people and engaging in conversation with minimal disruption since they took up no more space than a standing person. The browsers were designed to move at a speed similar to a human walking.

The basic principle was that a user anywhere on the internet could log into a browser configured to pilot the blimp. The system used a Java applet to send audio to the blimp, control its locomotion and retrieve audio and visual information. As the remote user guided the blimp through space, the blimp delivered live video and audio to the pilot's machine using standard tele-conferencing software. The user could thus observe and take part in any remote conversation accessible by the blimp. These blimps allowed the user to travel, observe, and communicate throughout 3D space. He could observe things as if he was physically there.

2011

Telepresence robots go airborne

03:40 12 May 2011
Jim Giles, contributor, Vancouver, Canada

Picture the scene: your boss phones to say he is working from home. A calm descends over the office. Workers lean back in their chairs. Feet go up on desks - this shift is going to be pretty chilled.

Suddenly, a super-sized video feed of your boss, projected onto the front of a helium-filled balloon equipped with a loudspeaker, floats silently into the room and starts issuing orders from above your head. Not such a good day.

This blimp-based boss, which brings to mind the all-seeing Big Brother of George Orwell's 1984, is the creation of Tobita Hiroaki and colleagues at Sony Computer Science Laboratories in Tokyo. Its eerie quality hasn't escaped Hiroaki - he says that his colleagues described the experience of talking to a metre-wide floating image of a co-worker as "very strange".

The project does have some non-sinister applications. It's part of a wider movement aimed at making "telepresence" possible. Imagine a medical specialist who can't make it to a regional hospital, but needs to consult with a patient there. Or an academic expert who wants to deliver a lecture remotely. Telepresence researchers are working on technology that can get a representation of these people into the room. To put it another way: telepresence lets you be in two places at the same time.


Irritation Based Innovation

If necessity is the mother of invention, irritation is the father.

People can be motivated to make changes based on so-called positive emotions, but I would argue that anger is more often the spark that fuels innovation. Some people live by the credo

Don't get mad, get even.

But as Mohandas Gandhi so adroitly observed,

An eye for an eye makes the whole world blind.

Aristotle offers additional insight into the challenges of channeling irritation:

Anyone can become angry - that is easy, but to be angry with the right person at the right time, and for the right purpose and in the right way - that is not within everyone's power and that is not easy.

When the wronged can transform their anger in constructive ways, they produce benefits that often outweigh and outlast the instigating incidents.

ImMadAsHell-Network I've been thinking about the inspirational power of irritation for a while now. The numerous clips I've seen and heard over the past several days from the late director Sidney Lumet's 1976 film, Network, have inspired me to compile some examples of irritation being a factor in empowering people to take action. The famous line repeated by the late actor Peter Finch as newscaster Howard Beale - and many of his viewers - is particularly on-point:

I'm mad as hell and I'm not going to take this anymore!

I have often described my own work as irritation-based research: don't [just] get mad about something; create a research project and/or prototype to solve it! MusicFX was born out of irritation with the music playing in a fitness center; ActiveMap grew out of frustration with colleagues being chronically late to meetings; Ticket2Talk was a response to a newcomer's awkwardness in meeting people and initiating conversations at a conference.

I believe we are all productive - or potentially productive - but differences in our personalities, training and experiences lead us to contribute in different ways in different realms. When irritation strikes, we naturally gravitate toward the channels through which we are best able to express or transform our frustration. Research happens to be a channel that has proven useful for me, but over the years, I've encountered numerous variations on this theme, applied to a broad range of domains. For the purposes of this post, I'll focus on a subset, exploring examples of people demonstrating how to constructively channel irritation to

  • write a book
  • write a program
  • create a company

Write a book

HowWeDecide One of the most inspiring convocation keynotes I've ever seen was Jonah Lehrer's Metacognitive Guide to College, delivered at Willamette University last fall. After presenting a fun and fascinating whirlwind tour of neuroscience, psychology and sociology, in the context of a 5-point guide to how to succeed in (and through) college, the 27 year-old author of How We Decide entertained questions from the audience. My favorite question was asked by a student who wanted to know how Lehrer decides which questions to ask (or pursue). He answered that he wrote a book about decisions primarily because he is pathologically indecisive, and generally tends to begin with his own frustrations. [Update, 2012-Apr-01: A Brain Pickings review of Lehrer's new book, Imagine: How Creativity Works, includes his observation that "the act of feeling frustrated is an essential part of the creative process."]

More recently, in preparing slides for a guest lecture on human-robotic interaction, I highlighted the irritation that prompted Sherry Turkle to write her book, Alone Together: Why We Expect More from Technology and Less from Each Other. Turkle experienced a robotic moment several years ago while viewing live Galapagos tortoises at the Darwin exhibit showing at the American Museum of Natural History, when her 14 year-old daughter, Rebecca, commented "they could have used a robot". While Turkle had been growing increasingly concerned about the ways that robots and other technologies were changing our perspectives and expectations, this moment provided the spark that led her to take on the daunting challenge of writing a book. And this constructive channeling of irritation has sparked numerous conversations about the relative costs and benefits of online vs. offline interactions.

Write a program

image from upload.wikimedia.org One of the earliest articulations of irritation-based software development I encountered was by Eric Raymond, author of the 2001 book, The Cathedral and the Bazaar, in which he states the first rule of open source software:

Every good work of software starts by scratching a developer's personal itch.

Later in the book, he begins the chapter on The Social Context of Open Source Software with the following elaboration of this principle:

It is truly written: the best hacks start out as personal solutions to the author's everyday problems, and spread because the problem turns out to be typical for a large class of users. This takes us back to the matter of rule 1, restated in a perhaps more useful way:

To solve an interesting problem, start by finding a problem that is interesting to you.

More recently, in a March 2008 blog post articulating 37signals' response to a critique by Don Norman, Jason Fried invoked a principle and rationale to support designing for ourselves (a fabulous post which also includes related insights about editing, software feature curation and not trying to please everyone):

Designing for ourselves first yields better initial results because it lets us design what we know. It lets us assess quality quickly and directly, instead of by proxy. And it lets us fall in love with our products and feel passionate about what we make. There’s simply no substitute for that. ...

We listen to customers but we also listen to our own guts and hearts. We believe great companies don’t blindly follow customers, they blaze a path for them. ...

Solutions to our own problems are solutions to other people’s problems too [emphasis mine]. By building products we want to use, we’re also building products that millions of other small businesses want to use. Not all businesses, not all customers, not everyone, but a healthy, sustainable, growing, and profitable segment of the market.

Interestingly, Don Norman's perspective on design innovation appears to have evolved since that exchange, as articulated in a controversial essay on Technology First, Needs Last: the research-product gulf, which appeared in the March 2010 issue of ACM Interactions. Although he does not cite irritation as a prime mover, Norman does call into question the influence of necessity on innovative breakthroughs:

I've come to a disconcerting conclusion: design research is great when it comes to improving existing product categories but essentially useless when it comes to new, innovative breakthroughs. ... Although we would prefer to believe that conceptual breakthroughs occur because of a detailed consideration of human needs, especially fundamental but unspoken hidden needs so beloved by the design research community, the fact is that it simply doesn't happen. ... grand conceptual inventions happen because technology has finally made them possible.

Create a company

One recent articulator of irritation as inspiration is Martin Tobias, a serial entrepreneur and currently CEO of Tippr, who was profiled in a December 2010 Fast Company article on Innovation Agents:

The one common thread throughout Tobias' entrepreneurial journey: a healthy dose of anger. With Imperium Renewables, Tobias was "personally pissed at the climate damage that oil companies were doing,” he says. “When I started Kashless, I was personally pissed that my friends in the local bar and restaurant business didn’t have effective ways to use the Internet to get people to walk in the door to their businesses. I’m saving small businesses that are run by my friends. That’s an incredibly personal thing.”

That kind of righteous fury, according to Tobias, is the secret to any startup. “Find a problem that personally pisses you off and solve it, and you’ll be a good entrepreneur," he says. "The day that I wake up and I don’t have a hard problem to solve, I will stop being an entrepreneur."

The personal problem that motivated Jamie Heywood, Benjamin Heywood and their friend Jeff Cole to create PatientsLikeMe was the struggle of their brother, Stephen Heywood, who was diagnosed with amyotrophic lateral sclerosis (ALS) in 1998. They developed a company and web platform to enable patients to share and learn from each others' experiences, and track the course of their condition and treatment(s), enabling them to tell their stories in data and words. The company recently expanded from its initial focus on 22 chronic conditions (including ALS, Parkinson's disease, HIV, depression, epilepsy, fibromyalgia, multiple sclerosis and organ transplants) to support patients suffering from any condition(s).

The story of the family's frustration - and response - also provided the inspiration for a movie, So Much, So Fast:

Made over 5 years, So Much So Fast tracks one family's ferocious response to an orphan disease: the kind of disease drug companies ignore because there's not enough profit in curing it. In reaction, and with no medical background, Stephen's brother Jamie creates a research group and in two years builds it from three people in a basement to a multi-million dollar ALS mouse facility. Finding a drug in time becomes Jamie's all-consuming obsession.

As I get to know more Health 2.0 activists, advocates and platforms - some of whom I profiled in previous posts on social media and computer supported cooperative health care and platform thinking - and encounter more examples of their blessing, wounding, longing, loss, pain and transformation, I increasingly appreciate the innovative power of irritation ... especially when the source of the irritation is a matter of life and death.

In reviewing these examples, I am repeatedly reminded of the wisdom of Carl Rogers' profound observation:

What is most personal is most general.

There are, of course, many other ways that people channel their personal frustrations in innovative ways that benefit a more general population, and I would welcome the contribution of other inspiring examples in the comments below.

I will finish off with a video clip of the scene from the movie, Network, that I mentioned at the outset. It's interesting to note how many of the problems that contributed to Howard Beale's madness in 1976 are still - or again - prominent in today's world ... providing plenty of fodder for future innovation.


Social Media and Computer Supported Cooperative Health Care

I've become increasingly aware of - and inspired by - the ways that social media is enabling platform thinking, de-bureaucratization and a redistribution of agency in the realm of health care. Blogs, Twitter and other online forums are helping a growing number of patients - who have traditionally suffered in silence - find their voices, connect with other patients (and health care providers) and discover or co-create new solutions to their ills. In my view, this is one of the most exciting and promising areas of computer supported cooperative work (CSCW). In my role as Publicity Co-chair for ACM CSCW 2012 (February 11-15, Seattle), I am hoping to promote greater participation in the conference among the researchers, designers, developers, practitioners and other innovators who are utilizing social media and other computing technologies for communication, cooperation, coordination and/or confrontation with various stakeholders in the health care ecosystem.

Dana Lewis, the founder and curator of the fast-paced, weekly Twitter chats on health care in social media (#hcsm, Sundays at 6-7pm PST), recently served as guest editor for an upcoming article on social media in health care for the new Social Mediator forum in ACM Interactions magazine. The article - which will appear in the July/August 2011 issue - weaves together insights and experiences from some of the leading voices in the use of social media in health care: cancer survivor, author and speaker "ePatient Dave" deBronkart promotes the use of technology for enabling shared decision-making by patients and providers; patient rights artist and advocate Regina Holliday shares her story of how social media tools are enabling her to channel her anger with a medical bureaucracy that hindered her late husband's access to vital information in his battle with cancer by writing on walls, online and offline; pediatrician Wendy Sue Swanson describes how she uses her SeattleMamaDoc blog for both teaching and learning in her practice of medicine; health care administrator Nick Dawson invokes the analogy of school in offering his perspective on the evolution of social media in health care, as it matures from freshman-level to graduate studies.

In my social media sojourns, I've encountered many other inspiring examples of people, programs and platforms that are being used to empower patients to connect more effectively with information and potential solutions:

It is important to note that health care has been an area of focus for CSCW in the past. For example, there was a CSCW 2011 session on health care, and papers on health care were also presented in other sessions:

There were also a number of health care papers presented at CSCW 2010:

There was also a CSCW 2010 workshop on CSCW Research in Health Care: Past, Present & Future with 21 papers.

My primary goal in this particular post is to increase awareness and broaden the level of participation among people designing, using and studying social media in health care. My most immediate goal is to alert prospective authors about the upcoming deadline for Papers and Notes - June 3 - which has been moved earlier this year to incorporate a revision & resubmission phase in the review process, designed in part to accommodate the shepherding of promising submissions by authors outside the traditional CSCW community who have valuable insights and experiences to share.

At some later phase, I'll start instigating, connecting & evangelizing other channels of potential participation, such as posters, workshops, panels, videos, demonstrations, CSCW Horizon (a category specially designated for non-traditional CSCW) and the doctoral colloquium. For now, I would welcome any help in spreading the word about the conference - and its relevance - to the health care social media community.


Innovation, Research & Reviewing: Revise & Resubmit vs. Rebut for CSCW 2012

Research is about innovation, and yet many aspects of the research process often seem steeped in tradition. Many conference program committees and journal editorial boards - the traditional gatekeepers in research communities - are composed primarily of people with a long history of contributions and/or other well-established credentials, who typically share a collective understanding of how research ought to be conducted, evaluated and reported. Some gatekeepers are opening up to new possibilities for innovations in the research process, and one such community is the program committee for CSCW 2012, the ACM Conference on Computer Supported Cooperative Work ... or as I (and some other instigators) like to call it, Computer-Supported Cooperative Whatever.

This year, CSCW is introducing a new dimension to the review process for Papers & Notes [deadline: June 3]. In keeping with tradition, researchers and practitioners involved in innovative uses of technology to enable or enhance communication, collaboration, information sharing and coordination are invited to submit 10-page papers and/or 4-page notes describing their work. The CSCW tradition of a double-blind review process will also continue, in which the anonymous submissions are reviewed by at least three anonymous peers (the program committee knows the identities of authors and reviewers, but the authors and reviewers do not know each other's identities). These external reviewers assess the submitted paper or note's prospective contributions to the field, and recommend acceptance or rejection of the submission for publication in the proceedings and presentation at the conference. What's new this year is an addition to the traditional straight-up accept or reject recommendation categories: reviewers will be asked to consider whether a submission might fit into a new middle category, revise & resubmit.

CSCW, CHI and other conferences have enhanced their review processes in recent years by offering authors an opportunity to respond with a rebuttal, in which they may clarify aspects of the submission - and its contribution(s) - that were not clear to the reviewers [aside: I recently shared some reflections on reviews, rebuttals and respect based on my experience at CSCW and CHI]. For papers that are not clear accepts (with uniformly high ratings among reviewers) - or clear rejects (uniformly low ratings) - the program committee must make a judgment call on whether the clarifications proposed in a rebuttal would represent a sufficient level of contribution in a revised paper, and whether the paper could be reasonably expected to be revised in the short window of time before the final, camera-ready version of the paper must be submitted for publication. The new process will allocate more time to allow the authors of some borderline submissions the opportunity to actually revise the submission rather than limiting them to only proposing revisions.

As the Papers & Notes Co-Chairs explain in their call for participation:

Papers and Notes will undergo two review cycles. After the first review a submission will receive either "Conditional Accept," "Revise/Resubmit," or "Reject." Authors of papers that are not rejected have about 6 weeks to revise and resubmit them. The revision will be reviewed as the basis for the final decision. This is like a journal process, except that it is limited to one revision with a strict deadline.

The primary contact author will be sent the first round reviews. "Conditional Accepts" only require minor revisions and resubmission for a second quick review. "Revise/Resubmits" will require significant attention in preparing the resubmission for the second review. Authors of Conditional Accepts and Revise/Resubmits will be asked to provide a description of how reviewer comments were addressed. Submissions that are rejected in the first round cannot be revised for CSCW 2012, but authors can begin reworking them for submission elsewhere. Authors need to allocate time for revisions after July 22, when the first round reviews are returned [the deadline for initial submissions is June 3]. Final acceptance decisions will be based on the second submission, even for Conditional Accepts.

Although the new process includes a revision cycle for about half of the submissions, community input and analysis of CSCW 2011 data has allowed us to streamline the process. It should mean less work for most authors, reviewers, and AC members.

The revision cycle enables authors to spend a month fixing the English, integrating missing papers into the literature review, redoing an analysis, or adopting terminology familiar to this field - problems that in the past could lead to rejection. It also provides the authors of papers that would have been accepted anyway an opportunity to fix minor things noted by reviewers.
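As an aside, the first-round outcomes described in the call can be sketched as a small decision function (the names and phrasings below are my own, not part of any official conference tooling):

```python
from enum import Enum

class FirstRound(Enum):
    """The three possible first-cycle decisions, per the CSCW 2012 call."""
    CONDITIONAL_ACCEPT = "Conditional Accept"
    REVISE_RESUBMIT = "Revise/Resubmit"
    REJECT = "Reject"

def next_step(decision: FirstRound) -> str:
    """Summarize what happens after the first review cycle."""
    if decision is FirstRound.REJECT:
        # Rejected papers cannot be revised for CSCW 2012.
        return "cannot be revised for CSCW 2012; rework for submission elsewhere"
    if decision is FirstRound.CONDITIONAL_ACCEPT:
        # Minor revisions, plus a description of how reviewer comments were addressed.
        return "minor revisions + description of changes; quick second review"
    # Revise/Resubmit: significant attention needed before the second review.
    return "significant revisions + description of changes; full second review"
```

Note that in both non-reject branches the final decision still rests on the second submission, which is why even a "Conditional Accept" flows into a second review.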

This new process is designed to increase the number and diversity of papers accepted into the final program. Some members of the community - especially those in academia - may be concerned that increasing the quantity may decrease the [perceived] quality of submissions, i.e., instead of the "top" 20% of papers being accepted, perhaps as many as 30% (or more) may be accepted (and thus the papers and notes that are accepted won't "count" as much). However, if the quality of that top 30% (or more) is improved through the revision and resubmission process, then it is hoped that the quality of the program will not be adversely affected by the larger number of accepted papers presented there ... and will actually be positively affected by the broader range of accepted papers.

I often like to reflect on Ralph Waldo Emerson's observation:

All life is an experiment. The more experiments you make the better.

If research - and innovation - is about experimentation, then it certainly makes sense to experiment with the ways that experiments are assessed by the research communities to which they may contribute new insights and knowledge.

There is a fundamental tension between rigorous validation and innovative exploration. Maintaining high standards is important to ensuring the trustworthiness of science, especially in light of the growing skepticism about science among some segments of the public. But scientists and other innovators who blaze new trails often find it challenging to validate their most far-reaching ideas to the satisfaction of traditional gatekeepers, and so many conferences and journals tend to be filled with more incremental - and more easily validated - results. This is not necessarily a bad thing, as many far-reaching ideas turn out to be wrong, but I increasingly believe that all studies and models are wrong, but some are useful, and so opening up new or existing channels for reviewing and reporting research will promote greater innovation.

I'm encouraged by the breadth and depth of conversations, conversions and alternatives I've encountered regarding research and its effective dissemination, including First Monday, arXiv and alt.chi. At least one other ACM-sponsored research community - UIST (ACM Symposium on User Interface Software & Technology) - is also considering changes to their review process; Tessa Lau recently wrote about that in a blog post at Communications of the ACM, Rethinking the Systems Review Process (which, unfortunately, is now behind the ACM paywall ... another issue relevant to disseminating research). The prestigious journal, Nature, recently wrote about the ways social media is influencing scientific research in an article on Peer Review: Trial by Twitter.

I think it is especially important for a conference like CSCW that is dedicated to innovations in communication, collaboration, coordination and information sharing (which [obviously] includes social media) to be experimenting with alternatives, and I look forward to participating in the upcoming journey of discovery. And in the interest of full disclosure, one way I am participating in this journey is as one of the Publicity Co-Chairs for CSCW 2012, but I would be writing about this innovation even if I were not serving in that official capacity.

[Update: Jonathan Grudin, one of the CSCW 2012 Papers & Notes Co-Chairs, has written an excellent overview of the history and motivations of the revise and resubmit process in a Communications of the ACM article on Technology, Conferences and Community: Considering the impact and implications of changes in scholarly communication.]


Reflections on Reviews, Rebuttals and Respect

Having recently served as associate chair for both the CSCW 2011 and CHI 2011 Papers & Notes Committees, I've read a large number of papers, an even larger number of reviews, and a slightly smaller number of rebuttals. In participating in back-to-back committees, a few perspectives and practices that impact the process of scientific peer review have become clearer to me, and I wanted to share a few of those here. I believe all of these boil down to a matter of mutual respect among the participants, and wanted to delve more deeply into some resources that offer guidelines for respectful practices.

I want to start out with a brief review of The Four Agreements, by don Miguel Ruiz, as I believe they provide a strong foundation for how to best approach the review process, as well as other areas of life and work (and I'll include links to earlier elaborations on three of the four agreements):

  1. Be Impeccable With Your Word: Speak with integrity. Say only what you mean. Avoid using the word to speak against yourself or to gossip about others. Use the power of your word in the direction of truth and love.
  2. Don't Take Anything Personally: Nothing others do is because of you. What others say and do is a projection of their own reality, their own dream. When you are immune to the opinions and actions of others, you won't be the victim of needless suffering.
  3. Don't Make Assumptions: Find the courage to ask questions and to express what you really want. Communicate with others as clearly as you can to avoid misunderstandings, sadness and drama. With just this one agreement, you can completely transform your life.
  4. Always Do Your Best: Your best is going to change from moment to moment; it will be different when you are healthy as opposed to sick. Under any circumstance, simply do your best, and you will avoid self-judgment, self-abuse and regret.

I see examples of these agreements being violated throughout all aspects of the review process. Reviewers say hurtful things about authors, their work and/or their papers in their reviews and/or online discussions. Some reviewers appear personally offended that authors would have the audacity to submit a paper the reviewers judge to be unworthy. Many reviews reflect implicit or explicit assumptions the reviewers are making about the paper, the work described by the paper, and/or the authors who have written the paper. Some reviews are so short that I have a hard time believing that the reviewers are really doing their best in fully applying their skills and experience to help us make the best possible decision on a paper (but I acknowledge this is an assumption).

Another framework that I believe is helpful to apply in this context is nonviolent communication (NVC), which is predicated on the assumption that everything we do is an attempt to meet our human needs, that conflict sometimes arises through the miscommunication of those needs, and that further conflict can be avoided by refusing to use coercive or manipulative language that is likely to induce fear, guilt, shame, praise, blame, duty, obligation, punishment, or reward. The Wikipedia entry for nonviolent communication offers four steps (that are very similar to some earlier distinctions I'd written about between data, judgments, feelings and wants):

  • making neutral observations (distinguished from interpretations/evaluations e.g. "I see that you are wearing a hat while standing in this building."),
  • expressing feelings (emotions separate from reasons and interpretation e.g. "I am feeling puzzled"),
  • expressing needs (deep motives e.g. "I have a need to learn about other people's motives for doing what they do") and
  • making requests (clear, concrete, feasible and without an explicit or implicit demand e.g. "Please share with me, if you are willing, your reasons for wearing the hat in this building.")
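As a toy illustration (the helper below is my own invention, not part of any NVC framework or review system), the four steps could be strung together into a template for composing a review comment:

```python
def nvc_comment(observation: str, feeling: str, need: str, request: str) -> str:
    """Compose a review comment that walks through the four NVC steps in order:
    neutral observation, feeling, need, and a request without an implied demand."""
    return " ".join([
        f"I notice that {observation}.",        # 1. neutral observation
        f"I feel {feeling}.",                   # 2. feeling, separate from evaluation
        f"I need {need}.",                      # 3. the deeper motive behind the comment
        f"Would you be willing to {request}?",  # 4. concrete, feasible request
    ])

# A hypothetical reviewer comment built from the four steps:
comment = nvc_comment(
    observation="the paper does not report how participants were recruited",
    feeling="uncertain about the generalizability of the findings",
    need="enough methodological detail to assess the sample",
    request="add a brief description of the recruitment process",
)
```

The point is not to mechanize empathy, but to notice how each clause stays an "I" statement addressed to the paper rather than a "you" statement addressed to the authors.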

Drawing on both of these sources for inspiration, ideally, a well-written review would have the following characteristics:

  • Focus on the paper, vs. the underlying work or the authors. All comments address [only] what is written in the paper. They should not address the work described by the paper or the authors who have written the paper. In a blind review process, reviewers typically do not have first-hand knowledge of the work described in the paper beyond what is written; reviewers who do have first-hand knowledge should recuse themselves due to a conflict of interest (i.e., they were co-authors or collaborators on the work). Thus, any comments about the work (vs. what is written about the work) are based on assumptions.
  • Follow the principles of non-violent communication (NVC). In particular, use "I" statements wherever possible, and avoid any direct references to the authors. For example, rather than saying "You don't say how you do X", an NVC phrasing might be something more like "It is not clear to me from the paper how X was done", or rather than saying "Why didn't you do X?", re-phrasing this as "I believe this or a future paper would be strengthened if it included X, or at least a compelling argument as to why X was not done".
  • Be compassionate and generous. Assume that the authors were doing their best in composing the paper, and look for reasons to accept in addition to reasons to reject (the latter usually being more readily identified by people trained in critical thinking). I was particularly inspired by the use of generosity in the directives issued by the CHI 2011 Papers & Notes Chairs at the committee meeting. Perhaps it's the proximity to the holiday season, but I found the use of that term more resonant throughout the meetings than the more traditional (and technical) "reasons to accept" that are often promoted by chairpersons.
  • Reverse the golden rule. The golden rule is "Do unto others as you would have them do unto you". A variation on this theme - which I first encountered in a book about positive psychology called How Full is Your Bucket? - is "Do unto others as they would have you do unto them." Particularly in a multi-disciplinary conference, different norms may be at work. I've had some strong disagreements with reviewers who are used to receiving terse and potentially offensive reviews, who implicitly apply the golden rule and figure if they can take it in reviews of their own papers, so should the authors whose papers they are reviewing. I always try to convince them to break the cycle of violent communication, with varying degrees of success. In a blind review process, of course, reviewers don't know the identities of the authors, and so can't really know how they would "have you do unto them". But I believe it is best to err on the side of nonviolence.

The rebuttal process also offers an opportunity for applying these practices. I won't go into as many details about the rebuttals, but I will say that if there was a category for "best rebuttal" (along the lines of "excellent reviews" and "best paper awards"), I saw two rebuttals among the papers we discussed that were outstanding exemplars of effective rebuttals. These had several factors in common:

  • a heartfelt expression of gratitude for the constructive feedback provided by the reviewers (and the reviews for these submissions were excellent)
  • the correct, gracious and effective identification of misinterpretations by reviewers, and a gentle articulation of the intended interpretation
  • an honest acknowledgment of correctly identified errors or omissions by the reviewers, and an explicit statement of how these would be addressed in a revision (if accepted)

I also witnessed some angry rebuttals, some of which included disparaging remarks about the committee and/or the conference community, none of which had any positive influence on the ultimate decision made on those papers. I won't go into any further details, as I do not believe that would be constructive. However, I would encourage all authors to wait at least 24 hours after they receive their reviews to even start composing their responses, as I believe this will lead to a more constructive engagement.

Due to the desire to respect confidentiality agreements, I won't disclose any specific reviews or rebuttals from the CSCW or CHI conferences as positive or negative examples, but I will conclude with a few rather extreme examples of negativity - which are so extreme they are humorous - in a blog post on Twisted Bacteria about peer review of scientific papers:

  • This paper is desperate. Please reject it completely and then block the author’s email ID so they can’t use the online system in future.
  • The biggest problem with this manuscript, which has nearly sucked the will to live out of me, is the terrible writing style.
  • The writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about.
  • The finding is not novel and the solution induces despair.

There are several more examples of violations of The Four Agreements and the principles of nonviolent communication available at Twisted Bacteria, and I'm grateful that the reviews I've seen (and written) in the CSCW and CHI communities do not reflect the extreme expressions found in this selection from the environmental microbiology community.

I hope that highlighting some of the more positive and constructive approaches one might take to peer reviewing (and rebutting) will promote a more mindful, respectful and effective process for all participants.


Virtual Reality, Somatic Cognition, Homuncular Flexibility and Object-Centered Sociality and Learning

Jaron Lanier recently wrote about virtual reality and its potential application to learning, utilizing some evocative terms and offering an educational scenario that reminds me of a seminal 1997 paper that described how a Nobel prize-winning biologist fused with her objects of study. The Saturday Wall Street Journal article gave me a keener appreciation for the potential applications of virtual reality (VR) - immersive computer-generated environments that model real or imaginary worlds - and for the pervasiveness of object-centered sociality, a concept I first encountered via Jyri Engestrom.

Lanier's article is about new frontiers for avatars - "movable representations of ourselves in cyberspace" - and how they can be used to manifest somatic cognition - the mapping of human body motion "into a theater of thought and strategy not usually available to us" in which one's hands (or presumably, other body parts) can solve complicated puzzles more quickly than one's head (or conscious mind). The examples he gives of somatic cognition outside the realm of virtual reality include professional musicians, athletes, surgeons and pilots, and I found myself thinking of a documentary I saw years ago on heavy machinery, and the way that a crane operator who was interviewed described the bewildering array of levers as virtual extensions of his arms and hands.

After describing a software bug in an early VR system that gave his humanoid avatar a gigantic hand, Lanier generalizes homuncular flexibility as a more general principle: "people can learn to inhabit other bodies not just with oddly shaped limbs [gigantic hands], or limbs attached in unfamiliar places, but even bodies with different numbers of limbs [lobster avatars]". Dean Eckles generalizes this notion even further - in a 2009 blog post reviewing a 2006 article by Lanier on homuncular flexibility (which offers more details about the lobster) - to distal attribution: our propensity for attributing sensory perceptions to internal or external - or proximal or more distant - sources.

However, it is Lanier's reference to an experiment with elementary school children being turned into the things they were studying that I found most interesting [although I have not been able to track down the reference]:

Some [students] were turned into molecules, dancing and squirming to dock with other molecules. In this case the molecule serves the role of the piano, and instead of harmony puzzles, you are learning chemistry. Somatic cognition offers an overwhelming emotional appeal for education, because it leverages vanity. You become the thing you are studying. Your sensory motor loop is modified to incorporate the logic of a science, and you develop body intuition about that logic.

This idea of fusing or becoming one with the object of study is one of the two primary manifestations of object-centered sociality articulated in Karin Knorr Cetina's seminal paper, "Sociality with Objects: Social Relations in Postsocial Knowledge Societies", [Theory, Culture & Society, 1997, Vol. 14(4):1-30]. As I noted in an earlier post on place-centered sociality, the other manifestation of object-centered sociality - sociality (interactions and relationships) through objects, such as online photos, videos or even blog posts - is better known, at least among many of those who study online social media (and mediation). But Lanier's article evokes the manifestation of sociality with objects themselves, reminding me of what I earlier wrote about Knorr Cetina's articulation of how this can promote deeper investigation and learning:

[Knorr Cetina] looks specifically at knowledge objects, and how they are increasingly produced by specialists and experts rather than through a broader form of participatory interpretation. She argues that experts' relationships with knowledge objects can be best characterized by the notion of lack and a corresponding structure of wanting [emphasis hers] because these objects "seem to have the capacity to unfold indefinitely": new results that add to objects of knowledge have the side effect of opening up new questions. This perpetual unfolding gives rise to "a libidinal dimension or dimension of knowledge activities" - an "arousal" and "deep emotional investment" - by the person studying the knowledge object. As an example, she describes the way that biologist Barbara McClintock, who won the Nobel Prize for her discovery of genetic transposition, would totally immerse herself in her study of plant chromosomes, identifying with the chromosomes and imagining how they might see the world - evoking an image (for me) of object-centered empathy more than sociality.

The prospect of empowering future Nobel laureates with virtual reality technology to engage with and virtually embody objects of knowledge at an early age is very exciting. Lanier mentions the Kinect camera for Xbox 360 made by Microsoft (his employer), which will likely put virtual reality technology in the hands (or homes) of millions of people in the near future.

The primary emphasis of Kinect marketing is on fun and games, but based on Lanier's article, and Knorr Cetina's insights into object-centered learning, Kinect might also provide a platform for a new approach to education. In an ideal world, of course, fun and learning would not be such distinct concepts ... perhaps this new technology will help promote a new dimension of convergence in the not-too-distant future.