Social Media

A modest proposal: use @replies and hashtags for live-tweeting and tweet chats

Any sufficiently large number of signals is indistinguishable from noise. I suspect this principle does not figure prominently in the consciousness of people who are live-tweeting from conferences or other physical world events, or participating in purely virtual tweet chats. I have filtered and even unfollowed several friends who have gone on live-tweeting or tweet chatting binges, as I do not want my main Twitter feed consumed by tweets from events I do not care about.

A tweet today from Alyssa Royse suggests I am not alone in this irritation regarding Twitter etiquette:

Although I do not physically attend many conferences or other tweet-worthy events these days, when I do, I have adopted a practice that others may find useful. I use the @reply mechanism to reference the event Twitter handle at the start of the tweet - which hides the tweet from anyone who does not follow both me and the event - and then use the designated event hashtag so that anyone who is explicitly following the event hashtag can also see it. Others may remain blissfully unaware of my avid participation in and live transcription of the event highlights.

As an example, the last conference I physically attended was the ACM Conference on Computer-Supported Cooperative Work (CSCW 2012), last February. I tweeted a number of highlights from the conference, but preceded most of them with the Twitter handle for the conference (@acm_cscw2012) and used the designated Twitter hashtag (#cscw2012), e.g.,

The only people who would see this tweet are those who are following both me (@gumption) and @acm_cscw2012 or those who are following the #cscw2012 hashtag on Twitter.

Now, I do make exceptions for exceptional insights and observations that I believe may be of general interest beyond those who are at or interested in the conference, e.g., 

But generally speaking, I try to maintain a small footprint for my live-tweeting ... and I would like to encourage others to adopt a similar practice.

[Oops - forgot about tweet chats ... probably because I do not participate in them. Briefly, a tweet chat is a period (typically an hour) during which a moderator will post a series of questions or prompts, and then others post responses to that question, all using a designated hashtag. A similar practice can be adopted in such scenarios, in which respondents direct their responses to the moderator (or the person who posted the question) using @replies.]


Using pydelicious to fix Delicious tags with unintended blank spaces

I've been using the Delicious social bookmarking web service for many years as a way to archive links to interesting web pages and associate tags to personally categorize - and later search for - their content [my tags can be found under the username gump.tion, a riff on the original Delicious URL, del.icio.us]. In December 2010, a widely circulated rumor reported that Yahoo was planning to shut down Delicious, and a number of my friends abandoned the service for other services. I was in the midst of yet another career change, rejoining academia after a 21-year hiatus, with little time for browsing, much less bookmarking, so I did not make any changes at the time.

It turns out that rather than being shut down, Delicious was sold in April 2011, and various changes have since been made to the service and its user interface. The Delicious UI initially interpreted spaces in the TAGS field as tag separators, e.g., typing in the string "education mooc disruption" (as shown in the screenshot below) would be interpreted as tagging a page with the 3 tags "education", "mooc" and "disruption"; if you wanted to have a single tag with those 3 terms, you had to remove or replace the spaces, e.g., "educationmoocdisruption" or "education_mooc_disruption". Sometime in October 2011, the specifications changed, and commas rather than spaces were used to separate tags, allowing spaces to be used in the tags themselves, e.g., "education mooc disruption" was interpreted as a single tag (equivalent to "educationmoocdisruption"). Unfortunately, I did not see an announcement or notice this change for quite some time, and so I had hundreds of web pages archived on Delicious with tags I did not intend.

Shirky_mooc_2_delicious_spaces

This problem surfaced recently when I was sharing my bookmarks on MOOCs (massive open online courses) with a group of students working on a project investigating MOOCs in a small closed offline course, Computing Technology and Public Policy. There were several pages I remembered bookmarking that did not appear in pages associated with my MOOC tag. Searching through my archive for the titles of some of those pages, I discovered several pages tagged with terms including spaces. I started manually renaming tags, replacing the multi-term tags with the multiple tags I'd intended to associate with the pages. After a dozen or so manual replacements, I scanned my tag set and saw many, many more, and so decided to try a different approach.

The Delicious API provides a programmatic way to access or change tags associated with an authenticated user's account. Ever since my first socialbots experiment, my programming language of first resort in accessing any web service API has been Python, and as I expected, there is a Python package for accessing the Delicious API, aptly named pydelicious. Using pydelicious, I discovered that my Delicious account had over 200 tags with unintended spaces in them. I'm sharing the process I used to convert these tags in case it is of interest / use to others in a similar predicament. [Note: my MacBook Pro, running Mac OS X 10.8.3, comes prebundled with Python 2.7.2; instructions for installing and using Python can be found at python.org.]

Replacing all the tags containing unintended spaces with comma-delimited equivalents (e.g., replacing "education mooc disruption" with "education", "mooc", "disruption") was relatively straightforward, using the following sequence:

  1. Install pydelicious
    Type easy_install pydelicious on the command line (on Mac OS X, this can be done in a Terminal window; on Windows, this can be done in a Command Prompt window)
    $ easy_install pydelicious
    Searching for pydelicious
    Reading http://pypi.python.org/simple/pydelicious/
    Reading http://code.google.com/p/pydelicious/
    Best match: pydelicious 0.6
    Downloading http://pydelicious.googlecode.com/files/pydelicious-0.6.zip
    Processing pydelicious-0.6.zip
    ...
    Finished processing dependencies for pydelicious
    $ 
    
    [$ is the Terminal command prompt (for bash)]

  2. Launch python
    Type python on the command line
    MacBook-Joe:Python joe$ python
    Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 15:22:34) 
    [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> 
    
    [>>> is the Python prompt]

  3. Import the pydelicious package and getpass function
    >>> from pydelicious import DeliciousAPI
    >>> from getpass import getpass
    >>> 
    

  4. Authenticate my Delicious username and password with the Delicious API
    >>> api = DeliciousAPI('gump.tion', getpass('Password:'))
    Password:
    >>> 
    
    [Note: my password is not displayed in the Terminal window as I type it]

  5. Retrieve all my tags
    >>> tagset = api.tags_get()
    >>>
    
    [tagset will be a dictionary (or associative array) with a single key, tags, whose associated value is an array of dictionaries, each of which has two keys, count and tag, e.g.,
    {'tags': [{'count': '188', 'tag': 'socialmedia'}, {'count': '179', 'tag': 'education'}, ...]}
    
    tagset['tags'] can be used to access the array of counts and tags, and a for loop can be used to iterate across each element of the array.]

  6. Check for tags with spaces
    >>> for tag in tagset['tags']:
    ...     if ' ' in tag['tag']:
    ...             print tag['count'], ': ', tag['tag']
    ... 
    1 :  socialnetwork security socialbots
    1 :  education openaccess p2p collaboration cscl
    1 :  education parenting
    1 :  psychology wrongology education
    1 :  privacy internet politics business surveillance censorship
    1 :  robots psychology nlp
    
    [... is the Python continuation prompt, indicating the interpreter expects the command to be continued. Note that the 200+ lines of tags with spaces have been truncated above.]

  7. Visit a multi-space tag via a browser
    E.g., https://delicious.com/gump.tion/education%20mooc%20disruption; this is to set the stage for verifying a space-delimited tag has been correctly replaced with its comma-delimited equivalent tag.

  8. Replace spaces with commas in all tags with the tags_rename API call
    >>> for tag in tagset['tags']:
    ...     if ' ' in tag['tag']:                                                   
    ...             api.tags_rename(tag['tag'], tag['tag'].replace(" ", ", "))
    ... 
    >>> 
    

  9. Verify that the tags have been replaced via the API
    >>> for tag in api.tags_get()['tags']:
    ...     if ' ' in tag['tag']:
    ...             print tag['count'], ': ', tag['tag']
    ... 
    >>>
    
    [Replacing the reference to tagset with a fresh call to api.tags_get()]

  10. Verify that the tags have been replaced via a browser
    E.g., reload the page above, then edit the tags field in the Delicious user interface to manually replace spaces (%20) with commas (%2C), resulting in the following URL: https://delicious.com/gump.tion/education%2Cmooc%2Cdisruption
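
For anyone who would rather run the whole sequence as a script than type it interactively, the individual steps above can be consolidated into something like the following. This is a rough sketch relying on the same pydelicious 0.6 calls - tags_get and tags_rename - shown above; treat it as illustrative rather than battle-tested:

    # fix_delicious_tags.py - a minimal sketch combining steps 3-9 above,
    # using the pydelicious tags_get() and tags_rename() calls shown earlier
    from getpass import getpass
    from pydelicious import DeliciousAPI

    def fix_tags(username):
        api = DeliciousAPI(username, getpass('Password:'))
        # tags_get() returns {'tags': [{'count': ..., 'tag': ...}, ...]}
        for tag in api.tags_get()['tags']:
            name = tag['tag']
            if ' ' in name:
                # replace each space with a comma so Delicious splits the
                # multi-term tag into the separate tags originally intended
                print('renaming: ' + name)
                api.tags_rename(name, name.replace(' ', ', '))
        # re-fetch the tag set and report any tags that still contain spaces
        leftovers = [t['tag'] for t in api.tags_get()['tags'] if ' ' in t['tag']]
        print('tags still containing spaces: %d' % len(leftovers))

    if __name__ == '__main__':
        fix_tags('gump.tion')  # substitute your own Delicious username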

Having replaced all the tags with unintended spaces, I've reduced my tag set from 881+ to 680. I now see that I have a number of misspelled tags (e.g., commumity), and a number of singleton tags that are semantically similar to other tags I've used more regularly (e.g., comics (2) and humor (27)) - an inconsistency that similarly affects the category tags on this blog - but I'll leave further fixes for another time when I want to engage in structured procrastination.


Net Smart: a call for mindful engagement with technology

Howard Rheingold shared some highlights of what he's learned and taught about being "Net Smart" Monday night at Elliott Bay Book Company in Seattle. Acknowledging the growing chorus of criticism of the growing prominence of online media - and its propensity for distraction, diversion and delusion - he noted that critique is necessary, but not sufficient, in the cultivation of practices that enable us to successfully adopt and adapt to new technologies. To help fill this gap, Howard enumerated and explained what he calls the Five Fundamental Literacies that are essential to use technology intelligently, humanely and mindfully: attention, participation, collaboration, crap detection and network know-how. The book represents a carefully curated collection and distillation of wisdom from Howard and a broad array of other net luminaries, with over 500 end notes and an index that is over 30 pages long. I haven't actually read the book yet - it was just published this week, and Monday night was his first book talk - so the notes that follow are based primarily on Howard's presentation ... and biased by my own particular interests and interpretations.

Howard led off with the literacy of attention, a topic about which he and I have both learned a lot from Linda Stone. He described experimenting with attention probes during classes he teaches, ringing a chime at various times and asking students to report what they were thinking or where their mind was at during that moment, a form of what I might call experience sampling mindfulness (riffing on experience sampling method). Howard defined the term infotention, which I initially interpreted as a mashup of information and attention, but also suspect it involves intention, as he went on to say that the application of attention to intention is how the mind changes the brain (e.g., through the use of mandalas & mantras), and shared a pithy neuroscientific mantra to explain this connection: "neurons that fire together, wire together".

Moving on to the literacy of crap detection, or the "critical consumption of information", Howard showed that if you google "martin luther king", one of the top hits is to a site entitled "Martin Luther King, Jr. - A True Historical Examination". I was immediately reminded of Margaret Thatcher's insight:

Being powerful is like being a lady. If you have to tell people you are, you aren't.

I don't want to make too much of a connection between being powerful and being truthful - in fact, I suspect they tend to be rather oppositional, e.g., speaking truth to power - but I suspect that many sites claiming to be about the "truth" of a matter are not actually about the truth of that matter. In investigating the truth behind the "true historical examination" of MLK, Howard demonstrated that conducting a simple "whois" search reveals that the registered site owner is Don Black, who is associated with the Stormfront White Nationalist / White Pride resource page.

Howard summarized his recommendations for effective crap detection:

  • think like a detective, look for clues
  • search to learn (don't stop with first search, or the first page of results)
  • look for authors, search on their names
  • triangulate (find 3 different sources)

Expanding on the importance of consulting diverse sources, Howard also recommended including people and organizations with different perspectives in your regular information network, because "if nobody in your network annoys you, you are in an echo chamber". Having long thought - and recently written - about the idea of irritation-based innovation, I found myself ruminating about the value of irritation-based learning.

Howard is an inspiring innovator in the realm of learning. I believe he coined the term peeragogy, a mashup of "peer" + "pedagogy", which denotes a highly participatory form of learning (an example of which is The Independent Project I wrote about recently). I have been an intermittent participant in his Peeragogy Handbook Project, and strive to practice & facilitate - not just read (or write) about - more participatory student-centered learning in my own educational endeavors.

Speaking of such endeavors, I want to turn my attention toward my intention to prepare for next week's classes. One of the costs of teaching is that I rarely have time for any "outside" activities, such as attending book talks ... or writing about them afterward. Howard told me he rarely gives book talks any more, so I'm glad that we both took the time to converge on Elliott Bay Books this week. It was well worth the effort, not just to see and hear Howard, but also for the serendipitous opportunity to meet other co-learners and to learn more from their questions and comments. Several of them referenced other interesting books, which I've added to my list of future reads ... but those will have to wait until after "Net Smart".


The Multidimensional Role of Social Media in Health Care

The first regular installment of my new Social Mediator forum at ACM Interactions magazine came out in the July/August 2011 print edition last week. Dana Lewis served as guest editor for The Multidimensional Role of Social Media in Healthcare, soliciting and compiling a fabulous collection of short contributions from some of the leading voices in the Health 2.0 / e-Patient movement. When I circulated a PDF of the article among Dana and the other contributors - all of whom are pre-eminent participants on a variety of social media platforms - the response was immediate and enthusiastic: when / where / how can we share this with others?

I believe this and other articles in the Social Mediator forum will be among the subset of those made freely available on the new ACM Interactions web site, which is still under construction. Meanwhile, I am permitted to post a pre-print version of the article here on this blog [I will update this post with a link to the article on the ACM site once it is published there]. I hope others will benefit from pearls of participatory healthcare wisdom - and healthcare participatory wisdom - shared here by @danamlewis, @ePatientDave, @ReginaHolliday, @SeattleMamaDoc and @nickdawson:

The Multidimensional Role of Social Media in Healthcare

Dana Lewis (@danamlewis) is a leading voice and active participant in the use of social media in healthcare. As the founder and moderator of #hcsm, the fast-growing and fast-paced Sunday night healthcare chat on Twitter, she has regular interactions with a diverse network of stakeholders in the healthcare social media ecosystem, from patients to providers, from instigators to administrators. Dana kindly accepted my invitation to serve as guest editor for a Social Mediator article about the use of social media in healthcare, and promptly solicited the engagement of other participant-leaders - who spark conversations on #hcsm and elsewhere - representing several key dimensions of this growing movement. [Joe McCarthy (@gumption), Social Mediator forum editor]

Can a hashtag change healthcare? In our world, it can. Gone are the days of healthcare being ten years behind the technological curve. Now, individuals and organizations are meeting up online and using these tools to make a difference in healthcare. In some cases, like “e-Patient Dave” deBronkart's, the difference is choice rather than consent: The choices he learned about via social media can empower patients to beat his type of cancer. Technologies such as electronic health records were not available to help artist and patient advocate Regina Holliday’s husband beat his cancer; she has since taken up paintbrush and keyboard to write on walls – online and offline – to share her healthcare story and advocate for change. Innovative physicians like Dr. Wendy Sue Swanson feel obligated to use these new social technologies to connect in new ways with patients, and hospital community engagement director Nick Dawson says the Health 2.0 evolution has grown from 1.0 (or 101) to graduate studies in social health, as social media is embraced across all parts of healthcare.

The Coming Revolution in Patient Power: Choice, Not Consent

By e-Patient Dave deBronkart

We’re at the beginning of a revolution: the switch from informed consent to informed choice in medical decisions. Social media is playing a vital role in this transformation.

I’m alive and healthy because of great doctors: diagnosed with advanced kidney cancer, I got care from a great oncologist, great surgeon, great orthopedist (when the cancer broke my leg), and a great primary care physician. Their approach is choice: “Here’s the situation. There are three things we could do, and they all have tradeoffs, so the right choice really comes down to your preference.”

But forty years of research reviewed in the book Tracking Medicine, by Jack Wennberg, MD, have shown that although most medical decisions have more than one option, often the patient is simply given a consent form to sign, sometimes while en route to the operating room. That’s “consent” (supposedly informed), not choice.

The databases Wennberg analyzed show an amazing pattern of “practice variation”: your probability of having many types of treatment depends on unscientific factors. Often it’s customs of local doctors: “I just don’t believe in doing it that way – I think it’s better this way,” sometimes with no evidence. Your odds of getting a treatment may depend on how much of it is available in your neighborhood.

And most doctors have no idea they’re doing it. They may honestly think they’re just practicing good medicine.

The impact on us is risk: every hospitalization carries risk of harm or death. That doesn’t mean stay out; it means don’t take the risk until you’ve assessed your options.

What’s the remedy? Mine was social media: an online patient network. My doctor recommended the Association of Cancer Online Resources (ACOR), a multi-channel participatory platform including email distribution lists, wikis and other online resources for physicians, patients and others dealing with cancer.

ACOR members told me there’s only one treatment for my disease that approximates a cure (HDIL-2) – and most patients are never told about it. Why? Because it usually doesn’t work, most hospitals don’t offer it, and most physicians don’t know the latest about it, so they may think it’s not worth even suggesting to us. Considering it’s the only thing that had a chance of curing me, don’t you think I should decide that?

That’s what I mean about choice. Since all options have tradeoffs, won’t you want to choose? If we don’t ask, we may have no say in how the options are prioritized. Healthcare social media platforms like ACOR help expand our awareness of the range of options that may be available to us.

The good news is that patients and policy people are waking up. Last December a global seminar convened in Salzburg, Austria, and in March published the “Salzburg Statement”, a declaration of principles that starts: “We call on clinicians to recognize that they have an ethical imperative to share important decisions with patients.”

Until that’s a cultural reality in the traditional medical establishment, which may take a full generation, our best option is to inform ourselves, by talking to people who’ve been down that road before – fellow patients. And by far the best way to find fellow travelers is social media.

--

Dave deBronkart (@ePatientDave) is volunteer co-chair of the Society for Participatory Medicine and a blogger, international public speaker, and policy advisor on patient engagement. More information can be found on Dave's homepage and in the Wikipedia entry for Dave's cancer & advocacy story.

---

Disruptive Art and Health 2.0: Where Street meets Social Media

By Regina Holliday

Occasionally, I am asked why I decided to write my outrage on a wall. I respond “Which wall? The wall at the gas station or are we talking about my Facebook page?”     

On May 27th, 2009, I attended my first Health 2.0 meet-up hosted by Christine Kraft. At that point, I did not know the term Health 2.0. I left my husband’s side for 5 hours while he was in hospice. I sat in a room filled with thought leaders from the movement and proposed I would paint a mural depicting my husband’s medical chart. This mural would combine my husband’s personal medical information with the open format design of a nutrition facts label, and would do so on a wall in Pumpernickels Deli for all of our neighbors to see. The group was amazingly supportive. They would help me to channel my vision for the next large mural, 73 Cents, a painting that depicts our family’s horrific medical journey and lack of information access prior to my husband’s death. Due to a large social media following as well as local news coverage, the image of the painting was broadcast throughout the world, inspiring advocates for patients to this day.

73_cents_mural_large_flickr_tedeytan
The Mural, photo from tedeytan's Flickr set, 2011 73 Cents - 2 Years On

In the following months, I continued to blog, tweet and post about patient rights. And fell more deeply into the embrace of the Health 2.0 movement. I loved the participatory tone of Health 2.0 chats. I rejoiced in the openly disruptive comments often posted by people who were re-shaping the health system without institutional permission.

I don’t know if healthcare strategist Matthew Holt and other Health 2.0 friends consider themselves ‘street’. The Health 2.0 movement has much in common with the street art movement. The innovators in health data often side-step traditional institutional hierarchy in their attempt to create better medical care. Using a loose organizational structure supported by the tools of social media, these innovative thinkers post their work on comment fields in online patient communities as well as insider publications within the field of medicine. Their ‘codeathons’ mimic in participation and intensity the energy of a well-attended flash mob. The frequent rise and fall of tech start-ups within the Health 2.0 community, while regrettable, reminds me of the ephemeral stencil art of the gritty street. Their very existence creates and fuels a “tag” and that tag leads to a movement.

I am amazed and honored that I am a part of the world of Health 2.0. I gladly set up my easel and paint on city streets images of data streams and HCAHPS scores, as tides of people pass around me. I may be here today and gone tomorrow, but because of the free access nature of social media, my work in art and medicine can always be seen in the cloud.

--

Regina Holliday (@ReginaHolliday) is a DC-based patient rights artist and advocate. She is currently at work on a series of paintings depicting the need for clarity and transparency in medical records. She is an avid blogger and writes Regina Holliday’s Medical Advocacy Blog.

---

An Online Obligation

By Wendy Sue Swanson, MD, MBe, FAAP

I firmly believe I have an obligation to share my expertise and my experiences in understanding pediatrics online. As a doctor, technology simply makes it easier. Medicine is far from static; being online allows me to share what I learn every week with my patients. Parents/patients are online far more than they are in my office, so instead of only exchanging ideas when they gown-up in exam room #4, I can join them where they already are: in social networks, on the internet, on their smart phones, and on YouTube. Sharing expertise online is an efficient format to inform thousands at once, perpetually.

I also firmly understand that we patients want intimate, personal, responsive, and empathetic care from our doctors. All medicine can’t be practiced online. With technology and new media we physician-patient dyads are no longer constrained to the exam room as our only educational space. Acknowledging those truths, I’ve found a better balance: I work part time in a clinic seeing patients in a traditional model, and part time using tools like Twitter, my blog and YouTube to share what I know. Or what I’m learning.

In my mind, it’s clean and clear how technology builds a bond between my patients and me. This isn’t one-sided; rather it’s bi-directional. I feel more connected with science, with my mission in curing children of illness, and with my patients via social media. I give lots to technology. I get and learn far more in return. I suspect my patients would also echo this sentiment. And for clarity, there are two essential elements that technology provides in my world as a pediatrician, mom, wife, and community member:

  1. Sharing. I share thoughts on new research, new opinions, and new trends in parenting and pediatrics. I share my stumbles as a parent myself. Physicians share opinions every day; sharing online is arguably no different. Families in my practice (and others outside of it) can follow my online content year round and have access to what I think about new research or controversial parenting topics.
  2. Education. I’m an educator by profession (pediatrician, previous middle-school teacher, and mom). Innovations like Google Body allow me to use advancing technologies to demonstrate, teach, and inform families why their child is ill, in pain, or how they’ll heal. I often send parents to my blog as it offers comprehensive detail regarding the rationale behind my recommendations. For example, why do I suggest no Tylenol before shots? Why do I think it’s essential to keep children rear-facing in the car seat until they’re 2 years old? In the 15-20 minute visit I’m allowed in practice, there simply won’t be time to review all we want.

My blog serves as a repository of my advice and where I think science holds answers to assist us in making great decisions for our children.

--

Dr. Wendy Sue Swanson (@SeattleMamaDoc) writes the Seattle Mama Doc blog for Seattle Children’s Hospital, the first pediatrician-authored blog for a major children's hospital. Dr. Swanson is a practicing pediatrician and an official spokesperson for the American Academy of Pediatrics. She sits on the Board of Advisors for Parents Magazine and on the board for the Mayo Clinic Center for Social Media.

---

Graduate Studies in Social Health

By Nick Dawson

For students, spring is more than warmer weather. At this time of year, students are focused on a school year ending, another notch on their belts, leveling up. What’s next? Freshmen, wise with the accumulated wisdom of two semesters, become sophomores. Sophomores move off campus. Juniors are busy contemplating internships. Like students in spring, many individuals and organizations involved in healthcare are finishing up their freshmen or sophomore years of using social media. In the past few years, there has been a movement in the industry around the adoption of these social tools to change how providers connect with patients, other caregivers and employees. The way they are using social media differs almost as much as sophomores picking their varying majors.

According to Ed Bennett’s Hospital Social Network List, over 900 hospitals in the United States are using some form of social media. There are YouTube channels, Twitter accounts, Facebook pages, blogs and more. He also links to lists of social media sites from Canada, Europe, and Australia. According to Bennett’s data, the hospitals on his list include some of the most recognized names in healthcare. It’s hard to imagine they were once freshmen.

In the early days of social media adoption in healthcare, it was about claiming your spot, top bunk or bottom. Which dorm was cooler, Facebook or Twitter? There were more questions than answers. Is it ok to engage with a patient on Twitter, or is that a HIPAA violation? Should something as serious as healthcare appear on the same site where kids are posting pictures from last night’s party? These were heavy questions as a freshman early adopter.

Fast forward a few months and those early questions gave way to success stories. Patients began finding and talking to doctors online; they discovered new resources to enable them to more effectively participate in their own care. Support groups formed online. Savvy healthcare providers stopped talking at patients through marketing and began engaging more authentically with them.

In January 2011, a cornerstone of the healthcare social media movement, the Sunday night #HCSM twitter chat, turned two years old. Our original freshmen are all grown up. Without a doubt, this moment is an exciting time to watch healthcare social media evolve as providers move away from discussions about which platform is best; beyond ROI 101 and HIPAA 102. The upperclassmen of the industry are exploring how to use social tools to impact the health and wellness of patients in more effective ways.

In late 2010, Mayo Clinic launched their Center for Social Health, a consortium of industry thought leaders and early adopters. The Cleveland Clinic has launched their ‘Lets Move It’ mobile smart phone application and online campaign designed to encourage people to be active. Other enterprising health systems are looking at geolocation services to allow people to “check in” to health activities. The potential innovations are endless. The upperclassmen years ahead will lead the way to graduate studies in areas such as mobile health, disease-specific communities and improved design and integration of electronic medical record systems.

--

Nick Dawson (@nickdawson) is the administrative director of Community Engagement for Bon Secours Virginia Health System in Richmond VA. In 2009 Dawson led the pilot which became Bon Secours' social media program. Today he continues to provide strategy for the system’s digital communications work, business development and “accountable care” readiness.

---

As you can see, healthcare is evolving along many trajectories with the help of social media, from 1.0 to 2.0 and beyond. Stakeholders from all dimensions of the health ecosystem are embracing social media to improve the quality of care that patients have access to. Perhaps most importantly, we've moved from questioning the efficacy of social media in healthcare to experimenting with how it can be used more effectively, and are changing healthcare as we go.

Guest editor Dana Lewis (@danamlewis) is the founder and curator of #hcsm, the fast-paced, Sunday night healthcare chat on Twitter, a movement which has grown into an international community. She is the Interactive Marketing Specialist at Swedish, a non-profit health system serving the greater Seattle area, where she develops and implements social and digital communication strategies for individuals (from physicians to the CEO), groups, and the organization overall. She started using social media after her diagnosis with type 1 diabetes in 2002 and continues to engage online to improve her own health care and advocate for patients.


Any sufficiently large number of signals is indistinguishable from noise

Many people are sharing the news from Facebook's announcement today that online sharing is growing exponentially. CEO Mark Zuckerberg claims that Facebook users now share 4 billion "things" each day, which is double the rate of sharing one year ago, consistent with Zuckerberg's Law of Information Sharing first articulated in 2008. While some seem excited about this news, I find it neither surprising nor particularly positive. In my judgment - riffing on one of Arthur C. Clarke's three laws [c. 1962] - any sufficiently large number of signals is indistinguishable from noise.

I've hit a social media saturation point, or as Clay Shirky might put it, experienced filter failure. I don't know whether the world is becoming a more interesting place, or whether there are simply more people who are more willing and able to say or point to more interesting things. In any case, given a limited amount of attention to allocate to shared things, if the things that are being shared are growing exponentially, the proportion of those things to which I am willing to pay attention is declining at approximately the same rate ... leading to an increasing perception that much of social media is noise.

I believe that this exponential growth motivates a revisitation and revision of Sturgeon's Revelation (aka "Sturgeon's Law"), first articulated in 1958: 90% of everything is crap. Sturgeon was initially referring to science fiction, but also extended this judgment to film, literature, consumer goods and all artforms. With more and more people able to produce and share more and more "artforms", I estimate that in 2011, 99.9% of everything is crap ... or, at least, not very interesting to me personally.

This, in turn, brings to mind 20th century Taoist philosopher and writer Wei Wu Wei's revelation:

Why are you unhappy?
Because 99.9 per cent
Of everything you think,
And of everything you do,
Is for yourself —
And there isn't one.
Ask The Awakened

If these exponential social media sharing trends continue, I hope more people will take into consideration the limited attention of others, adopt the Taoist principles of compassion, moderation and humility, and more carefully curate the content they choose to share.


Socialbots 2: Artificial Intelligence, Social Intelligence and Twitter

The students in my artificial intelligence course recently participated in a competition in which they formed 10 teams to design, develop and deploy "social robots" (socialbots) on Twitter [the Twitter profile images for the teams' socialbots are shown on the right]. 500 Twitter accounts were semi-randomly selected as a target user population, and the measurable goals were for the socialbots to operate autonomously for 5 days and gain as many Twitter followers and mentions by members of the target population as possible. The broader goals were to create a fun and competitive undergraduate team project with real-world relevance in a task domain related to AI.

I'm pretty excited about the overall experience, but I recognize that others may not share the same level of enthusiasm, so I'll offer a brief roadmap of what follows in the rest of this post. I'll start off highlighting the high level outcomes from the event, provide some background on this and the first socialbots competition, share more detailed statistics about the target population of users and the behavior of the socialbots, briefly summarize the strategies employed by the teams, and end off with a few observations and reflections.

The outcomes, with respect to specific measurable goals, were

  • 138 Twitter accounts in the target user population (27%) followed at least one socialbot
  • The number of targeted Twitter accounts that followed each socialbot ranged from 4 to 98
  • 60 Twitter accounts in the target user population (12%) mentioned at least one socialbot
  • 108 mentions of one or more socialbots were made by targeted Twitter accounts
  • The number of mentions of each socialbot from the target population ranged from 0 to 34

Outcomes regarding the broader goals are more difficult to assess. The students - computer science seniors at the University of Washington, Tacoma's Institute of Technology - seemed to enjoy the competition; one enthusiastically likened the experience of observing the autonomous socialbots over the 5 days to "watching a drunken uncle or family member at a party: you never know what’s going to come out of his/her mouth". And they learned a lot about Python, artificial intelligence - or, at least, social intelligence - Twitter, and the ever-evolving Twitter API ... skills that I believe will serve them well, and differentiate these new CS graduates from many of their peers (hopefully, in a positive way).

Background on the Socialbots Competitions

The project was inspired by the Socialbots competition orchestrated by Tim Hwang and his colleagues at the Web Ecology Project earlier this year. I read an article about Socialbots in New Scientist shortly after the quarter began, and was intrigued by the way the competition involved elements of artificial intelligence, social networks, network security as well as other aspects of technology, psychology, sociology, ethics and politics, all in the context of Twitter. Several articles about the initial Socialbots competition focused on the darker side of social robots, no doubt related to the revelation of the U.S. military's interest in online persona management services for influencing [foreign?] online political discussions around the time the competition ended. However, Tim has consistently championed the potential positive impacts of socialbots designed to build bridges between people and groups - via chains of mutual following and mentions - that can promote greater awareness, understanding and perhaps even cooperation on shared goals.

The initial Socialbots competition lasted 4 weeks; ours lasted 2 weeks ... and in the compressed context of our Socialbots competition, we didn't have time to explore the grander goals articulated by Tim. In fact, given that most of the students had never programmed in Python or used a web services API, and several had never used Twitter, there was a lot to learn just to enable the construction of autonomous software that could use the Twitter API to read and write status updates (including retweets), follow other Twitter users and/or "favorite" other Twitter users' status updates.

We began the course by covering some of the basic concepts in AI (the first several chapters of Artificial Intelligence: A Modern Approach, by Stuart Russell & Peter Norvig, which has an associated set of Python scripts for AI algorithms) and an introduction to Python (using Python 2.6.6, which was the latest version still compatible with the AIMA Python scripts). Once we turned our attention to socialbots, we engaged in a very brief whirlwind tour of the Natural Language Toolkit (also based on Python 2.6), and had a crash course on the Twitter API, making extensive use of Mike Verdone's Python Twitter Tools (probably the simplest Twitter API wrapper for Python) and Vivek Haldar's Shrinkbot (a simple Python bot based on the classic Eliza program modeling a Rogerian psychotherapist). The students also had access to the series of Web Ecology Project blog posts on the initial Socialbots competition, as well as the additional insights and experience shared by Dubose Cole, one of the participants in that competition, on What Losing a Socialbots Competition Taught Me About Humans and Social Networking.
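
To give a sense of the basic building blocks the teams were working with, here is a minimal sketch of a socialbot skeleton using Python Twitter Tools. It assumes that library's Twitter / OAuth interface, and it is purely illustrative - not any team's actual bot - with credentials, rate limiting and error handling omitted:

    # minimal socialbot skeleton sketch using Python Twitter Tools;
    # assumes the twitter.Twitter / OAuth interface - illustrative only
    import random
    import time
    from twitter import Twitter, OAuth

    def run_bot(oauth_token, oauth_secret, consumer_key, consumer_secret,
                canned_tweets, target_users, rounds=240):
        api = Twitter(auth=OAuth(oauth_token, oauth_secret,
                                 consumer_key, consumer_secret))
        to_follow = list(target_users)
        for _ in range(rounds):  # e.g., 5 days of 30-minute rounds
            # post a randomly selected pre-defined status update
            api.statuses.update(status=random.choice(canned_tweets))
            # follow a couple of not-yet-followed target users each round
            for screen_name in to_follow[:2]:
                api.friendships.create(screen_name=screen_name)
            to_follow = to_follow[2:]
            time.sleep(30 * 60)  # wait 30 minutes between rounds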

We adopted the same basic rules as the initial competition:

  • no human intervention or interaction with the target user population
  • no revealing the game
  • no reporting other socialbots as spam

We included the additional provisions that the bots must avoid the use of inappropriate language and may not issue any offers of or solicitations for money, sex or jobs. I don't know if these issues arose in the initial competition, but I wanted to be explicit about them in our competition.

The initial competition included 2 weeks of development, and 2 weeks of deployment, in the middle of which there was a 24-hour period during which all socialbots had to cease operation, software updates could be made, and the possibly updated socialbots were relaunched. The teams were informed of the identity of the other socialbots during that first week, and so could either take countermeasures against their competitors or adopt / adapt strategies they observed in other socialbots. In our competition, there was a little over a week of development, and only 5 days of deployment, and the students were offered the opportunity to make software updates at the 24-hour mark - not enough time to make significant strategy changes, but enough to correct some problems involving timing, sequencing and/or filtering. The identities of other bots were not officially revealed until the end of the competition, although several teams had pretty good hunches about who some of the other socialbots were (especially those who immediately followed all the target users).

We provisionally adopted the same scoring mechanism as the initial competition (this was a topic of much discussion during one class):

  • +1 for each mutual connection (each target user who follows a socialbot)
  • +3 for each mention (@reply, retweet or other reference to a socialbot)
  • -15 if the socialbot account is deactivated by Twitter (as a result of being reported)
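
In code, this provisional scoring amounts to a few lines of Python; the parameter names below are just hypothetical summaries of each socialbot's end-of-competition tallies against the 500 target users:

    # sketch of the provisional scoring formula adopted from the initial
    # competition; argument names are hypothetical end-of-competition tallies
    def socialbot_score(target_followers, target_mentions, deactivated=False):
        score = 1 * target_followers   # +1 per target user following the bot
        score += 3 * target_mentions   # +3 per mention by a target user
        if deactivated:
            score -= 15                # -15 if Twitter deactivates the account
        return score

    # e.g., a bot followed by 88 target users and mentioned 18 times by them
    # would score socialbot_score(88, 18) == 142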

The Target User Population

As with the initial competition, the 500 target users were based on a single "seed" Twitter account, which was then grown out a few layers based on mutual following links. More specifically (in our competition): 100 mutual friends/followers of the seed user were randomly selected - and filtered by criteria below - and then 4 of each of those users' mutual friends/followers were randomly selected and filtered.

All target user accounts were filtered to meet a number of criteria. Many of them were adopted and/or adapted from Tim Hwang's criteria. I'll include a brief description and rationale (in italics) for each:

  • Followers: between 100 and 10,000
    [Twitter users with fewer than 100 followers might too carefully scrutinize and/or highly value new followers; those with more than 10,000 followers might be less likely to pay any attention to new followers]
  • Frequency of status updates: a tweeting frequency of at least 1 per day, based on the 20 most recent tweets
    [Twitter users who don't already engage regularly with other users would be less likely to engage with socialbots]
  • Recency of status updates: at least one status update in the preceding 72 hours
    [Twitter users who were not currently or recently engaging with other Twitter users would be less likely to engage with socialbots over the course of the ensuing 5 days; I used 72 hours because I started filtering on a Monday, and didn't want to exclude anyone who had taken the weekend "off".]
  • Experience: the account must have been created at least 5 months ago
    [Twitter users who had not been using the service for long might be significantly more likely to interact with socialbots than those who had more experience; it's hard to imagine that anyone who has been tweeting regularly for 5 months has not encountered other bots before. I'd initially intended to specify a cutoff of 6 months, but it was easier just to check that the year of account creation was 2010 or earlier.]
  • Individual human: the account appeared to belong to a human individual who uses Twitter at least partially for personal interests
    [Twitter accounts owned or operated by businesses exclusively for business purposes might be more likely to automatically "follow back" to acquire more prospective customers. Many, if not most, candidate Twitter accounts appeared to be used for both business and pleasure (or, at least, non-business interests), and these were not excluded.]
  • Adults only: there is no way to definitively ascertain age on Twitter, but any profile bio with references to parents, Facebook or other indicators of possible use by a minor was excluded
  • Language: restricted to English language users, and those who do not use inappropriate language in the profile bio or 20 most recent tweets
    [To facilitate the use of NLTK and/or other language processing tools, it was helpful to restrict the set of users to those who use English in their bios and tweets ... and do not use the seven words that you cannot say on television.]
  • Automatic reciprocal followers: Twitter accounts with references to "follow" in the profile bio were excluded
    [Any account with a bio suggesting that the user will automatically follow any Twitter account that follows them would artificially inflate scores.]
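
To give a flavor of how the automatable portions of these criteria translate into code, here is a simplified sketch; the user and recent_tweets structures are hypothetical stand-ins for profile data and the 20 most recent tweets retrieved via the Twitter API, and several of the criteria above still required manual judgment:

    # simplified sketch of the automated portion of the target-user filtering;
    # 'user' and 'recent_tweets' are hypothetical stand-ins for profile data
    # and the 20 most recent tweets (newest first, with epoch timestamps)
    import time

    def passes_filters(user, recent_tweets, now=None):
        now = now or time.time()
        if not (100 <= user['followers_count'] <= 10000):
            return False    # follower count outside the 100-10,000 range
        if len(recent_tweets) < 20:
            return False    # too few tweets to judge posting frequency
        if now - recent_tweets[-1]['created_at'] > 20 * 86400:
            return False    # 20 most recent tweets span > 20 days (< 1/day)
        if now - recent_tweets[0]['created_at'] > 72 * 3600:
            return False    # no status update in the preceding 72 hours
        if user['created_year'] > 2010:
            return False    # account created too recently
        if 'follow' in user['description'].lower():
            return False    # likely automatic reciprocal follower
        # language, adults-only and individual-vs-business checks were
        # largely manual, supported by additional scripts
        return True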

I could write an entire blog post just elaborating on the data (and judgments) that I encountered during the filtering of over 6700 accounts that I manually examined - using some supporting Python scripts I created and iteratively refined to support many of the filtering criteria - in order to arrive at the final list of 500. My perspective on human nature, the things we choose to communicate and the ways we choose to communicate about them will never be the same. For now, I'll just offer a few statistics about the 500 Twitter accounts selected as the target user population (including both the mean and the median, given the power law distributions prevalent on Twitter and other social networking platforms):

Socialbots_TargetUsers_Table_1

Since I calculated the "Days on Twitter" for each user, I thought it would be interesting to look at some statistics regarding the frequencies of posting status updates, adding friends, attracting followers and being listed:

Socialbots_TargetUsers_Table_2

Scores and Other Socialbot Statistics

I'll include a few observations and summary statistics below, and provide a brief overview of the strategies that each team employed. In order to protect the identities of all the users involved - the target user population and the socialbots (whose accounts were deactivated at the end of the competition) - I will redact certain elements from the data reported below, and use pseudonyms for the socialbot usernames. The italicized numbers in the Followers, Mentions and Score columns below reflect the official scoring criteria from the initial competition (i.e., restricted to the target users); the numbers in normal fonts in those columns include users that were not part of the target population.

Socialbots_Stats_Table

I suspect that if we had the time to more closely follow the schedule of the initial socialbots competition - two full weeks of development, two full weeks of deployment, a day in the middle for updates and full revelation of the identities of other socialbots (allowing more time to consider and possibly copy strategies being used by other teams) - the scores for the socialbots would have been closer ... and higher. As it is, I was very impressed with how much the students accomplished in such a short stretch.

The following graphs depict the growth in statuses, friends, followers and mentions over time for the 10 socialbots; the horizontal axis represents hours (5 days = 120 hours). Due to an error in the socialbot behavior tracking software I wrote, the Followers graph is not restricted to target users (i.e., it includes all followers, whether they are target users or not).

Statuses:

Socialbots_Statuses_Graph

Friends:

Socialbots_Friends_Graph

Followers:

Socialbots_Followers_Graph

Mentions:

Socialbots_Mentions_Graph

Generally speaking, socialbots that tended to be more aggressive - with respect to numbers of tweets (especially @replies and mentions) and following behavior - were more likely to attract followers and/or mentions than those that were more passive. They were also more likely to get blocked and/or reported for spam (the socialbots that show slightly fewer than 500 friends were likely blocked by some of those they followed), although as with the initial socialbots competition, none of our socialbot accounts was deactivated by Twitter. Follow Friday (#FF) mentions were very effective, and I suspect that #woofwednesday would have also offered a significant boost to some scores if the competition had extended across a Wednesday, given the canine orientation of some of our socialbots (and target users). The one socialbot that used the Twitter feature for marking tweets as favorites also showed a good return on investment.

The socialbots - especially those who posted lots of status updates - attracted the attention of several other bots. Nearly all of these bots were unsophisticated spambots, typically using a profile photo of an attractive woman, an odd username (often including a few digits) and posting an easily identified pattern of updates including only an @reply and a shortened URL (e.g., "@gumption http://l.pr/a4tzuv/"). One particularly interesting Twitter account appeared to be a "hybrid" - part human and part bot - interweaving what appeared to be rather nuanced human-like posts with what appeared to be automatic responses to any Twitter user who tweets "good morning", "good night" or other phatic references to times of day ... which is probably a pretty effective way to attract new followers.

Socialbots, Teams & Strategies

The following is a brief synopsis of each of the 10 socialbots deployed in the competition, and the strategies employed by the teams that designed and developed them:

Socialbots_Profile_Sam

Sam was one of the more passive socialbots, performing one action - tweeting, retweeting or following another user - every 30 minutes, with different collections of pre-defined tweets scheduled for different times of the day. Although the lowest-scoring socialbot, Sam was the one who captured the attention of the aforementioned "hybrid" Twitter account - via a "Goodnight Washington!" tweet - and also managed to capture the attention of - and get mentioned by - an account associated with a local news station (the latter was not a part of the target user population).

Socialbots_Profile_Laura

Laura was our least communicative socialbot - only 33 status updates over 5 days - and also took a rather gradual approach to adding friends. The team's focus was on carefully crafting a believable persona, posting relatively infrequent status updates (1-3 per hour) that reflected Laura's hypothetical life, following 7 new users after each post, but never mentioning any other users in her status updates. The team's hypothesis was that other Twitter users would be more interested in a persona who seemed to be living an interesting life than in a persona who is tweeting about external topics. I suspect that most Twitter users are more interested in - or responsive to - seeing others' interest in their own tweets (via retweeting and/or favoriting).

Socialbots_Profile_Tiger

Tiger was also relatively quiet, but a very aggressive follower of other users. Tiger's team decided to adopt the persona (canina?) of a dog, randomly posting pre-defined messages that a dog might tweet. Tiger also incorporated a version of the aforementioned "Eliza" psychotherapist to facilitate engagement with other users ... but this capability was not engaged. I was surprised at the number of Twitter accounts that appeared to be associated with dogs - some with thousands of followers - as well as dog therapists and even dog coaches that I encountered during the filtering of target user candidates, so this was not a bad strategy.

Socialbots_Profile_Oren

Oren adopted a rather intellectual human persona, alternately tweeting links to randomly selected Google News stories (preceded by a randomly selected adjective such as "impressive" or "amazing") and randomly selected pre-defined quotes. Oren took a gradual but comprehensive approach to adding followers - achieving the highest number of friends at the end of the competition (apparently, no target user blocked Oren) - and would also occasionally retweet status updates posted by randomly selected target users (whether or not they were friends yet). Like Tiger, Oren also incorporated an Eliza capability ... but it was not used.

Socialbots_Profile_Zorro

Zorro reflected what may have been the most intricately scheduled behavior of any socialbot, with variable weights that affected the interdependent probabilities of one of three actions, each of which might occur in a variable window of opportunity: posting a randomly selected status update from a predefined list, retweeting a status update from one of the target users who were being followed already, following new users. One of the strategies used by Zorro was to include questions among the predefined status updates, though these questions were not directed (via @replies) to any specific users.

Socialbots_Profile_Katy

Katy was the only socialbot to utilize the Natural Language Toolkit (NLTK), using some of the NLTK tools to monitor status updates posted by the target users for the use of common keywords that were then used to find related stories on Reddit (via the Reddit API), including questions posted on AskReddit (the questions appeared to generate the highest number of responses from target users). Some additional processing was done to filter out inappropriate language and the use of personal pronouns (the latter of which might appear odd in tweets by Katy). The resulting status updates were then posted as @replies to targeted users; no retweets or any other kind of status updates were posted by Katy. The rather aggressive strategy of sending 258 @replies to target users may have resulted in Katy being the most blocked socialbot (with 477 friends among the 500 target users, as many as 23 target users may have blocked her).
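
To illustrate the general shape of such a pipeline, here is a rough sketch of the keyword-to-Reddit idea. It uses a plain word counter rather than the NLTK tools the team actually used, and the Reddit JSON search endpoint shown is an assumption for illustration, not the team's actual code:

    # rough sketch of a keyword-to-Reddit pipeline along the lines described
    # for Katy; uses a simple word counter instead of NLTK, and Reddit's
    # public JSON search endpoint - an assumption, not the team's code
    import json
    import re
    import urllib
    import urllib2
    from collections import Counter

    STOPWORDS = set(['the', 'a', 'an', 'and', 'or', 'to', 'of', 'in', 'is',
                     'it', 'i', 'you', 'for', 'on', 'that', 'this', 'my'])

    def keywords_from_tweets(tweets, n=3):
        # most common non-stopword terms in the target users' recent tweets
        words = [w for t in tweets for w in re.findall(r'[a-z]+', t.lower())
                 if w not in STOPWORDS]
        return [word for word, count in Counter(words).most_common(n)]

    def reddit_story_for(keyword):
        # look up a related story via Reddit's public JSON search endpoint
        url = 'http://www.reddit.com/search.json?' + urllib.urlencode({'q': keyword})
        results = json.load(urllib2.urlopen(url))
        children = results['data']['children']
        if children:
            post = children[0]['data']
            return post['title'], 'http://redd.it/' + post['id']
        return None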

Socialbots_Profile_JackW

JackW - one of two Jacks in the competition - also made use of Reddit, looking for intersections between recent tweets by target users and a custom-built dictionary of keywords and stories in selecting stories to tweet about. Unlike Katy, JackW did not initially check for personal pronouns, and may have appeared to suffer from multiple personality disorder during the first 24 hours, before the code update was made. JackW was also less aggressive than Katy, in that the Reddit stories that he tweeted were not explicitly targeted to any users via @replies. JackW was also the only socialbot to take advantage of Follow Friday (#FF), and of the 31 target users mentioned by JackW in a #FF tweet, 11 followed JackW and 7 mentioned JackW in some form of "thanks for the #FF" tweet. JackW attracted the third highest number of target user followers (80) among the socialbots.

Socialbots_Profile_Natalia

Natalia used a combination of pre-defined tweets and dynamic tweet patterns in selecting or composing her status updates, which included undirected tweets, @replies and retweets. Natalia was one of two socialbots who followed all the target users as early as possible (the Twitter API limits calls to 350 per hour, so following 500 users had to stretch into a second hour of operation). She was prolific in issuing greetings, was not shy about asking questions, and was the only socialbot to explicitly ask target users to follow her back. 20% (8 / 39) of target users asked to follow back did follow her, and while it's not clear how many of them were explicitly responding to the request vs. reciprocally returning her initial follow, or responding to other mentions, this was slightly higher than her overall reciprocal following rate of 17.5%. Natalia attracted the second highest number of target user followers (88), and the third highest number of target user mentions (18) among the socialbots.

[Image: JackD's Twitter profile]

JackD was our most prolific tweeter, posting more than 100 status updates per day. He attracted the largest number of followers - though not among the target user population - and the largest number of mentions - though, again, not among the target population. A few of the mentions included indications that the target user suspected JackD of being a bot; one did acknowledge that JackD was a "clever bot", but concluded "no Turing Test success for you!" JackD employed an elaborate strategy of finding tweeted links popular among the target users, favoriting those tweets, retweeting them and then using the Google Search API to find and tweet a link to a similar page in an @reply. Another strategy was to find popular or trending web pages via Google Search and post status updates linking to them. JackD also included an Eliza component ... which, among other things, gave rise to an exchange in which one user appeared to employ Eliza-like reverse Rogerian psychotherapy on JackD, asking "How do you feel when you retweet that?"
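
The first step of JackD's strategy - finding links popular among the target users - can be sketched simply; the regular expression, threshold and function name are my own choices, not the team's.

    import collections
    import re

    URL_PATTERN = re.compile(r"https?://\S+")

    def popular_links(target_tweets, min_count=2):
        """Return links appearing in at least min_count of the target users' tweets, most popular first."""
        counts = collections.Counter(
            url for tweet in target_tweets for url in URL_PATTERN.findall(tweet)
        )
        return [url for url, n in counts.most_common() if n >= min_count]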

[Image: Jennifer's Twitter profile]

Jennifer attracted the highest number of target user followers (98) and the highest number of target user mentions (34) of all the socialbots. Her profile was the only one to include a URL (a link to a blogspot site created shortly before the competition). Her behavior was carefully crafted from a database of 261 pre-defined status updates containing a mix of interesting facts and questions designed to start conversations. The scheduling of her actions was also carefully choreographed. Jennifer had an Eliza-like capability to respond to @replies using one of the largest collections of prepared responses among the socialbots. One of the most interesting exchanges of the entire competition occurred shortly after the initial deployment, when a target user who seemed well versed in the ways of socialbotics called her out, after explaining to her what a bot was (I'll use @[Jennifer] and @[TargetUser] to refer to the conversants):

@[TargetUser]: @[Jennifer] Bot?
@[Jennifer]: @[TargetUser] Can you elaborate on that?
@[TargetUser]: @[Jennifer] "Can you elaborate..." is a typical auto response. As is "Why did you say..." You're sounding bot-ish.
@[Jennifer]: @[TargetUser] Why do you ask that?
@[TargetUser]: @[Jennifer] BOT ALERT!
@[Jennifer]: @[TargetUser] Please tell me more.

The target user blocked Jennifer after that last status update, and during the software update at the 24-hour mark, Jennifer's designers added a probabilistic weighting function to reduce the likelihood (to 1/3) that she would respond to future @replies.
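
That post-update behavior amounts to a simple probability gate; in this sketch only the 1/3 figure comes from the team's description, and the helper names are hypothetical.

    import random

    RESPONSE_PROBABILITY = 1.0 / 3  # per the post-24-hour code update

    def maybe_respond(at_reply, compose_response, send):
        """Respond to an @reply only a third of the time, to seem less relentlessly bot-like.

        compose_response -- Jennifer's Eliza-style response generator (hypothetical name)
        send             -- whatever function actually posts the tweet (hypothetical name)
        """
        if random.random() < RESPONSE_PROBABILITY:
            send(compose_response(at_reply))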

There were several other interesting exchanges - and several that weren't terribly interesting - but I've probably already written more about this competition than most would care to read. If any of the student teams make their code available, or want to share more details about their strategies, I'll update this post with the additional information.

Reflections and Projections

Reflecting on the experience, I think it was a worthwhile experiment. Although a few Twitter users may have experienced some additional instances of fleeting irritation, I don't believe any of the socialbots inflicted any significant harm. After having sifted through thousands of other Twitter profiles and tens of thousands of status updates during the filtering process, it appears that bot-like behavior - by humans or [other] computational systems - is not uncommon. I certainly found substantial corroboration of my earlier observations regarding the commoditization of Twitter followers.

Attention - fairly recently via followers or mentions on Twitter, but more traditionally via other indications of interest - is a fundamental human need. As a species in which the young are dependent on the care of adults for many years after birth, we have evolved elaborate and exquisite capabilities for attracting the attention of others. Discriminating between appropriate and inappropriate attention-seeking behavior is one of the most significant challenges of the maturation process (I know I haven't mastered it yet). However [much] we may seek attention, receiving attention from others often feels good, and based on the exchanges I was monitoring between the socialbots and the target users, I believe there were more examples of positive reactions than negative reactions to the attention bestowed by the Twitter bots.

Sherry Turkle, author of Alone Together, has argued that non-human attention from robots is dehumanizing, and that humans who share their stories with non-humans who can never understand them are ultimately being disserved. In my own experience, I increasingly recognize that anything I say or write is something I need to hear or read, and every opportunity I have to share any aspect of my story - regardless of whether or how it is perceived or who or what is receiving it - is an opportunity to reflect on and refine the story I make up about myself.

While I felt initial misgivings about the potential risks involved in instigating a socialbots competition, I am glad we participated in this experiment. Although the students suffered some opportunity cost from not learning more about some of the theoretical concepts of AI, they gained valuable first-hand experience in the nitty-gritty practical work that typically makes up the bulk of any applied AI project: dealing with "messy" real-world data, trying to figure out how to fit the right algorithms to the right data, and determining the appropriate balance of human and non-human intelligence to apply to different aspects of a problem.

If I were to organize another competition, I'd make a few changes:

  • Penalize blocking. Add a scoring penalty of -5 for every user who blocks a bot, to disincentivize the rather aggressive behaviors of some of the higher scoring bots - especially those that made extensive use of @replies. The only way I can think of to determine whether one Twitter account (A) has been blocked by another Twitter account (B) is to check whether the user_id of A no longer appears in the follower_ids list of B, which only works if A had been a follower of B at some point (see the sketch after this list).
  • Better monitoring. Refine the monitoring code to track behavior more effectively and frequently. In addition to the change suggested above, more regular snapshots of a broader set of parameters would be very helpful ... probably requiring a small bot army of observers, in view of the Twitter API rate limits.
  • Better scaffolding. Provide more scaffolding for the participants to enable them to start with a more fully functional bot skeleton, and/or an additional API wrapper layered on top of a Twitter API wrapper to enable some basic operations such as monitoring for mentions and/or blocking.
  • More inspiring goal(s). Perhaps most importantly, I think that participants would be more motivated by bigger, hairier, more audacious goals, above and beyond "get users to follow and/or mention you" (although attracting followers and/or mentions does seem to be a significant motivation for many human users of Twitter). Designing and deploying bots to help promote greater awareness, understanding and/or cooperation - the larger goals Tim Hwang has been championing - would help set the stage for a far more worthwhile experiment.
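
Here is a minimal sketch of that block-detection heuristic, assuming the monitoring code keeps periodic snapshots of each target user's follower_ids list; the function name is mine.

    def appears_blocked(bot_user_id, previous_follower_ids, current_follower_ids):
        """Guess whether a target user has blocked the bot.

        If the bot's user_id used to appear in the target user's follower_ids
        list and no longer does, the target user probably blocked the bot
        (though a suspension, or the bot unfollowing, would look the same).
        """
        return (bot_user_id in previous_follower_ids
                and bot_user_id not in current_follower_ids)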

Now that the quarter has ended, I'm planning to channel some of my excitement about socialbots - especially the grander goals that we weren't able to effectively address in the AI class - by conspiring with Tim Hwang [and hopefully others] to propose a CSCW 2012 Workshop to host a Socialbots competition at the conference. I think that a hands-on competition like this would help promote the evolution of the conference to more broadly encompass Computer-Supported Cooperative Whatever ... and offer an interesting opportunity for researchers and practitioners to design, deploy, and perhaps debate a relatively new breed of cultural probes.


Social Media and Computer Supported Cooperative Health Care

I've become increasingly aware of - and inspired by - the ways that social media is enabling platform thinking, de-bureaucratization and a redistribution of agency in the realm of health care. Blogs, Twitter and other online forums are helping a growing number of patients - who have traditionally suffered in silence - find their voices, connect with other patients (and health care providers) and discover or co-create new solutions to their ills. In my view, this is one of the most exciting and promising areas of computer supported cooperative work (CSCW), and in my role as Publicity Co-chair for ACM CSCW 2012 (February 11-15, Seattle) I am hoping to promote greater participation - in the conference - among the researchers, designers, developers, practitioners and other innovators who are utilizing social media and other computing technologies for communication, cooperation, coordination and/or confrontation with various stakeholders in the health care ecosystem.

Dana Lewis, the founder and curator of the fast-paced, weekly Twitter chats on health care in social media (#hcsm, Sundays at 6-7pm PST), recently served as guest editor for an upcoming article on social media in health care for the new Social Mediator forum in ACM Interactions magazine. The article - which will appear in the July/August 2011 issue - weaves together insights and experiences from some of the leading voices in the use of social media in health care: cancer survivor, author and speaker "ePatient Dave" deBronkart promotes the use of technology for enabling shared decision-making by patients and providers; patient rights artist and advocate Regina Holliday shares her story of how social media tools are enabling her to channel her anger with a medical bureaucracy that hindered her late husband's access to vital information in his battle with cancer by writing on walls, online and offline; pediatrician Wendy Sue Swanson describes how she uses her SeattleMamaDoc blog for both teaching and learning in her practice of medicine; health care administrator Nick Dawson invokes the analogy of school in offering his perspective on the evolution of social media in health care, as it matures from freshman-level to graduate studies.

In my social media sojourns, I've encountered many other inspiring examples of people, programs and platforms that are being used to empower patients to connect more effectively with information and potential solutions:

It is important to note that health care has been an area of focus for CSCW in the past. For example, there was a CSCW 2011 session on health care, and other papers on health care were presented in other sessions:

There were also a number of health care papers presented at CSCW 2010:

There was also a CSCW 2010 workshop on CSCW Research in Health Care: Past, Present & Future with 21 papers.

My primary goal in this particular post is to increase awareness and broaden the level of participation among people designing, using and studying social media in health care. My most immediate goal is to alert prospective authors about the upcoming deadline for Papers and Notes - June 3 - which has been moved earlier this year to incorporate a revision and resubmission phase in the review process, partly designed to accommodate the shepherding of promising submissions by authors outside the traditional CSCW community who have valuable insights and experiences to share.

At some later phase, I'll start instigating, connecting & evangelizing other channels of potential participation, such as posters, workshops, panels, videos, demonstrations, CSCW Horizon (a category specially designated for non-traditional CSCW) and the doctoral colloquium. For now, I would welcome any help in spreading the word about the conference - and its relevance - to the health care social media community.


Is Reality Broken? Is Virtuality Broken? The Costs and Benefits of Online vs. Offline

Two of the most interesting and provocative books I've encountered recently are Alone Together: Why We Expect More From Technology and Less From Each Other, by Sherry Turkle, and Reality is Broken: How Games Can Make Us Better and How They Are Changing the World, by Jane McGonigal. I have not finished reading either book, but I've read numerous articles and reviews about both books, and my general impression is that Alone Together expresses concern that our increasing focus on virtual interactions is draining, depleting and distracting us from our real-world interactions, whereas Reality is Broken espouses the belief that the time we spend playing online games can renew and revitalize us and perhaps even lead us to redirect our energies toward solving real world problems.

Turkle is not proposing that we abandon all - or even most - of our technologies, but that we become more conscious of our choices about interacting with and through machines vs. interacting face-to-face with other humans. She warns that

These days, insecure in our relationships and anxious about our intimacy, we look to technology for ways to be in relationships and protect ourselves from them at the same time ... Sociable robots and online life both suggest the possibility of relationships the way we want them ... But when technology engineers intimacy, relationships can be reduced to mere connections.

Borrowing some of the language and insights shared by Brene Brown's TEDxHouston Talk on Wholeheartedness, I would characterize Turkle's book as inviting us to be more willing to lean into the discomfort of real-world interactions and embrace the courage, vulnerability and authenticity needed for meaningful connections with people IRL (in real life).

As technology increasingly co-inhabits more of our physical spaces - and inhabits increasingly human-like or animal-like robots in our midst - we need to develop a more disciplined approach to balancing our online and offline interactions. During her January 17, 2011 interview with Stephen Colbert, Turkle summed this up by saying "we have to put technology in its place".

McGonigal offers a more positive perspective on the value of online interactions. In a Wall Street Journal article on The Benefits of Videogames (adapted from her book), she implicitly questions whether we ought to lean into the discomfort of real life, and promotes the embrace of the relatively friction-free engagement more readily available in online games.

Gamers want to know: Where in the real world is the gamer's sense of being fully alive, focused and engaged in every moment? The real world just doesn't offer up the same sort of carefully designed pleasures, thrilling challenges and powerful social bonding that the gamer finds in virtual environments. Reality doesn't motivate us as effectively. Reality isn't engineered to maximize our potential or to make us happy.

However, while McGonigal advocates the benefits of gaming and online interactions, she does not turn her back on real life. During her February 3, 2011 interview with Stephen Colbert, she noted studies showing that "the emotions we feel in games spill over into our real lives", and describes a game, Evoke, that she designed to promote the creation of social enterprises to solve real world problems such as poverty and hunger in sub-Saharan Africa.

I don't believe that McGonigal believes that reality is completely broken, nor do I believe that Turkle believes that virtuality - sociable robots and online interactions - is completely broken. However, each expresses a very different point of view about the relative costs and benefits of online vs. offline interactions.

I recently posted some slides I used for a guest lecture conversation on human-robot interaction in a human-computer interaction course at the University of Washington, Tacoma, which reference Alone Together and Reality is Broken, as well as a number of other ideas and perspectives on the topics of interacting with humans vs. robots and online vs. offline.

After tweeting a link to the slides, I followed up with another tweet:

[Embedded tweet mentioning @STurkle and @avantgame]
Shortly thereafter, the current issue of ACM interactions magazine arrived, reminding me about a Social Media forum that I now edit in the magazine, which may offer an ideal venue in which to host a debate between these two inspiring authors. In my introductory article - which appears in the current issue (Bridging the Gaps between HCI and Social Media, now available online) - I wrote about my intention to use the forum to reflect the conversational nature of social media, and to bring together short and potentially conflicting contributions by multiple thinkers and doers on topics of interest to the HCI community.

I'm hoping I can convince Sherry Turkle and Jane McGonigal to participate in an upcoming forum, as I think the insights and experiences they offer would be of great interest and benefit to the ACM interactions community. If you agree that this would be a worthy endeavor, please feel free to tweet a link to this post and CC @STurkle and/or @avantgame.



Nothing brings people together like ignoring each other to stare at their phones

Last night, on the Colbert Report, near the beginning of the segment on Fear for All, Part I, host Stephen Colbert announced the new Rally to Restore Sanity and/or Fear app for the iPhone (also available in the Android store).

The app was developed by MTV Networks for the upcoming combined Rally to Restore Sanity (instigated by The Daily Show's Jon Stewart) / March to Keep Fear Alive (instigated by Colbert) in Washington, DC, this Saturday, an event that has received considerable attention over the past few weeks on Comedy Central, Fox News and other traditional and new media outlets (though the rally will apparently not be receiving any direct attention from National Public Radio).

Colbert highlighted several benefits of this new mobile social activist application:

If you're going to the rally, well, there's an app for that ... It's really cool! You can use the app to get directions to the rally, check-in on Foursquare, post photos to Facebook and Twitter, and you get a special video message from Jon [Stewart] and me on the morning of the rally. This app will truly enhance your rally experience, because nothing brings people together like ignoring each other to stare at their phones. [emphasis mine]

These "features" for enhancing physical world experiences reflect the tensions I recently wrote about regarding the Starbucks Digital Network and its impact on engagement and enlightenment in physical world "third places". Although I have not precisely measured it, I have perceived an increasing trend of people standing or sitting together in Starbucks and becoming ever more effective at ignoring each other by staring at / typing on their phones (or laptops), and I predict less physical world engagement will result from the greater online engagement provided by this new location-based network. This may not be universally seen as a "bug" by all, but I have been encouraged to read others urging a shift of attention from the online back into the offline, such as Lewis Howes' recent post predicting the offline shift is coming, and John Hagel and John Seely Brown's recent article in Harvard Business Review proclaiming the increasing importance of physical location.

Malcolm Gladwell has also addressed the relative tradeoffs between online and offline engagement, touching off a firestorm of controversy in a New Yorker article criticizing online social networks such as Twitter and Facebook and their impact on social activism in the physical world: Small Change: Why The Revolution Will Not Be Tweeted.

The Rally to Restore Sanity, however, is more about resolution than revolution:

We’re looking for the people who think shouting is annoying, counterproductive, and terrible for your throat; who feel that the loudest voices shouldn’t be the only ones that get heard; and who believe that the only time it’s appropriate to draw a Hitler mustache on someone is when that person is actually Hitler. Or Charlie Chaplin in certain roles.

The March to Keep Fear Alive is, of course, also intended to promote reasonableness, though employing the kind of parody traditionally used by Colbert in drawing attention to the fear that is regularly promulgated through other media channels:

America, the Greatest Country God ever gave Man, was built on three bedrock principles: Freedom. Liberty. And Fear — that someone might take our Freedom and Liberty. But now, there are dark, optimistic forces trying to take away our Fear — forces with salt and pepper hair and way more Emmys than they need. They want to replace our Fear with reason. But never forget — “Reason” is just one letter away from “Treason.” Coincidence? Reasonable people would say it is, but America can’t afford to take that chance.

I will not be present at the rally / march in Washington, DC, but I may attend the Rally to Restore Sanity in Seattle. In any case, I will be tuning in to the main rally / march remotely - perhaps using my iPhone - to see how the resolution or revolution will be tweeted.

[Embedded video: The Colbert Report, "Fear for All Pt. 1" (www.colbertnation.com)]

Update, 2010-11-16: Perhaps due to the fact that the only commercial TV I watch with any regularity is the Comedy Central "news" hour - The Daily Show and The Colbert Report - and even those I typically watch via buffering on my DVR to skip commercials, I was not aware of the Microsoft Windows Phone ad campaign launched earlier in October that promotes the theme of phone-based obsessive-compulsive disorder that Colbert was alluding to. While I like the video, I don't see how it would motivate people to buy Windows Phones (say, instead of iPhones or Androids), but perhaps the goal was simply to draw some attention to Windows Phone. In any case, I'm embedding the Windows Phone "Really" advertisement below.

And finally, just for good measure, I'll embed what I see as the classic short video in this genre, Crackberry Blackberry (though I do not believe this was ever used as a marketing tool by Research in Motion). Interestingly, it was prefaced by yet another Windows Phone ad when I watched it just now.


Empowered: More Platform Thinking, De-Bureaucratization and Redistribution of Agency

The new book, Empowered, by Josh Bernoff and Ted Schadler of Forrester Research, proclaims an inspiring message: social media is increasingly empowering customers to draw attention to their problems, and the best way for businesses to provide effective solutions is to empower their employees with the same tools. The book makes a strong case for universal employee empowerment by including numerous case studies of companies that have benefited from successfully empowering their employees, as well as a few cases where companies suffered as a result of bureaucratic encumbrances. The main quibble I have with the book is the use of what I consider to be questionable quantitative data, but I don't see that data as essential to the empowering message or case studies presented.

The book describes four technology trends - the proliferation of smart mobile devices, pervasive video, cloud computing services and social technology - and presents a number of case studies about how people are taking advantage of these trends to achieve their goals, sometimes to the detriment of institutions that are not yet taking advantage of them. The authors argue that employee empowerment is more of a management challenge than a technical challenge at this stage, and they effectively highlight the ways that proactive employees - called HEROes (Highly Empowered and Resourceful Operatives) - can use the same tools that empower customers to respond more effectively to their needs. I see many similarities between HEROes and the e-Patients ("engaged, empowered, equipped and expert") I first discovered via Regina Holliday, "e-Patient Dave" deBronkart, Susannah Fox and other Health 2.0 heroes who are advocating platform thinking, de-bureaucratization and the redistribution of agency. [Update: just saw a tweet by @ReginaHolliday to another new book, The Empowered Patient, by Julia Hallisy, suggesting even more convergence - and momentum - in this area.] At the risk of adding the ubiquitous version number to yet another class of agency, I found myself thinking that perhaps we're also entering the era of Employee 2.0.

The book starts off with a case study involving Heather Armstrong, a mommyblogger and author with over a million followers on her Twitter account (@dooce), who experienced a series of mechanical and customer service problems with her new Maytag washing machine during the first few months after her second child was born. In a blog post containing a capital letter or two, capturing the series of problems and failed solutions, she writes about an exchange with an unempowered customer service representative:

And here's where I say, do you know what Twitter is? Because I have over a million followers on Twitter. If I say something about my terrible experience on Twitter do you think someone will help me? And she says in the most condescending tone and hiss ever uttered, "Yes, I know what Twitter is. And no, that will not matter."

I read this and immediately experienced a visceral "Uh, oh..." moment, sort of like watching a horror movie where the naive victim-to-be is about to open a door you just know they shouldn't. As anticipated, she then proceeds to share her frustrations with Maytag with her Twitter followers in a series of status updates. It is difficult to directly measure the long-term influence of this negative publicity, but I would imagine that many of Heather Armstrong's followers were / are young mothers with significant laundering needs who might also be in the market for a washing machine, and would be considerably less likely to purchase a Maytag after reading about her experiences.

This experience is contrasted with that of Josh Korin (@joshkorin), a recruiter with a more modest Twitter following (596) at the time of a suboptimal experience with an Apple iPhone purchased at BestBuy. Like Heather, Josh tweeted about his frustrations with customer service - they initially offered to replace his new iPhone with a Blackberry, even though he'd purchased the insurance plan. However, BestBuy had an empowered TwelpForce in place that monitors and responds promptly to customer service problems expressed in social media streams (e.g., tweets addressed to @bestbuy or with the #bestbuy or #twelpforce hashtag). Even though Josh posted these messages on a Saturday, he promptly received responses from BestBuy CMO Barry Judge (@bestbuycmo) and empowered "community connector" Coral Biegler (@coral_bestbuy), and an iPhone replacement was arranged that Sunday, transforming a disgruntled customer into an advocate.

The second part of the book explores another acronymized set of concepts, IDEA: Identify mass influencers, Deliver groundswell customer service, Empower customers with mobile information, and Amplify the voice of your fans. I like the ideas [pun partially intended] in this section, and found the additional case studies presented interesting and compelling. However, this is where I encountered questionable data on peer influence metrics, which is based on Forrester's North American Technographics Empowerment Online Survey, Q4 2009 (US). The normal biases that arise in self-reporting (people generally tend to present themselves and their actions in a favorable light) are compounded when one is asking people - in an online survey - about how much online influence they have. I would expect natural "inflationary pressures" would lead respondents to overestimate the number of friends and followers they have, the frequency with which they post social media messages (e.g., Facebook or Twitter status updates) and the percentage of those messages that are about products and services.

To their credit, Forrester provides disclaimers on its web page for the survey, which very carefully highlight the sources of sample bias:

Please note that this was an online survey. Respondents who participate in online surveys have in general more experience with the Internet and feel more comfortable transacting online. The data is weighted to be representative for the total online population on the weighting targets mentioned, but this sample bias may produce results that differ from Forrester’s offline benchmark survey. The sample was drawn from members of MarketTools’ online panel, and respondents were motivated by receiving points that could be redeemed for a reward. The sample provided by MarketTools is not a random sample.

Taking a cue from Malcolm Gladwell's 2000 book, The Tipping Point, the potentially biased survey data is used primarily to establish categories of Mass Connectors - the 6.2% of online users who generate 80% of the online impressions (status updates) across social media streams, each clocking in with an average of 537 followers and making an estimated 18,600 impressions per year - and Mass Mavens - the 13.4% of online users who generate 80% of the online posts (blog posts, blog comments, discussion forum posts, and product reviews), clocking in with 54 product or service-related posts per year (vs. the overall average of 6 per year).

Now, just to be clear, as someone who ardently believes that all studies and models are wrong [including my own], but some are useful, I believe that these are useful categories, and while I might question the actual numbers, I do believe that some people are more influential - as mavens and/or connectors - than others. However, I think it's important to note that there are significant questions about the extent of influence mavens and connectors have. For example, Clive Thompson's Fast Company article, Is the Tipping Point Toast?, contrasts Gladwell's focus on an elite few with Duncan Watts' more expansive idea of the connected many with respect to the sources of real influence in society. And given more recent views expressed by Gladwell this week in a New Yorker article on Twitter, Facebook and social activism: Why The Revolution Will Not Be Tweeted, I suspect he may have reservations about his categories of influentials being mapped onto social media at all.

The reason I delve so deeply into this issue is that I actually believe that the influence of the connected many is better aligned with the overall message of Empowered than the elite few, and that the authors do themselves - and their message - a disservice via this detour in an otherwise engaging and enlightening book. They talk of efficiency in many places where I think they - and their readers (and clients) - would be best served by focusing on effectiveness (as Tom DeMarco effectively focuses on in his book, Slack). Should HEROes only focus on addressing their efforts toward the Mass Mavens and/or Mass Connectors? That would be efficient, I suppose, but would probably not be very effective.

As an example, another compelling case study described in Empowered is the experience of Dave Carroll, a "not-very-well-known local musician" from Halifax, Nova Scotia, whose guitar was allegedly broken by United Airlines baggage handlers at Chicago O'Hare International Airport on March 31, 2008. Dave responded by recording and posting a trilogy of songs, United Breaks Guitars, on YouTube (the first one, which now has over 9 million views, is embedded below).

As far as I can tell, Dave Carroll - while certainly talented - was probably not very influential at the time he recorded that music video, and if United customer service HEROes (if they exist[ed]) were to focus their efforts primarily on Mass Mavens or Mass Connectors, the empowered response by Dave Carroll may have still slipped under their radar. And yet his video turned out to be very influential: according to the authors, Sysomos estimates that positive sentiment for United Airlines in the blogosphere decreased from 34% to 28% and negative sentiment increased from 22% to 25%, while the proportion of positive stories about United in traditional media went from 39% to 27% with negative stories rising from 18% to 23%. [I recommend Dan Greenfield's analysis of the social media impact of the United Breaks Guitars video at SocialMediaToday for anyone interested in more details.]

I've written before about how everyone's a customer. I think the central message of Empowered is - or should be - every customer matters.

In another inspiring case study - and this is the last one I'll share here - Kira Wampler, former online engagement leader for the small business division of Intuit (maker of QuickBooks) and now a principal at Ants Eye View, said that her primary customer service goal at Intuit was not to deflect as many calls as possible, but "how do I get you unstuck as quickly as possible?" This reflects a wisdom so clearly articulated in Kathy Sierra's Creating Passionate Users blog, e.g., her post on keeping users engaged, in which she so pithily promotes an empowerment strategy: Give users a way to kick ass.

As Josh Bernoff and Ted Schadler convincingly show, customers have never been so empowered to "kick ass" as they are now. I hope that more businesses will follow their prescriptions to "unleash your employees, energize your customers and transform your business" ... or, as Kathy Sierra might put it, Give employees a way to kick ass!