
July 2010

Mobile Internet Intent, Action and Inaction

I've recently encountered a number of interesting studies, discussions and disagreements regarding the use of mobile phones to access the Internet. As an iPhone user, I love having the Internet in my pocket, but I often find myself deferring actions that require significant involvement until I have the Internet on my lap, with the larger screen and keyboard of my MacBook Pro. A face-to-face discussion with a friend who, like me, tends to defer reading or acting on long emails when he sees them on his iPhone - and then often forgets them entirely - followed by an online discussion with another iPhone user who sees no difference between the Internet-related activities he conducts on his iPhone and on his laptop, prompted me to dig a little deeper for broader trends.

The online disagreement occurred in a comment exchange on an e-Patients.net post by Susannah Fox seeking input on an upcoming Pew Internet survey on health topics. The thread started with a comment on accessing health information via mobile phones, and I joined in with an observation about the shallow skimming I tend to do when accessing information on my iPhone, projecting that onto the population at large:

What I wonder, though, is how the constrained size and interface of a mobile device affects the willingness of information seekers to delve more deeply – or broadly – into online sources to verify the information they find. I suspect that people who read email on mobile phones vs. laptops / desktops tend to read less of the messages and read / skim them more quickly, and suspect that mobile phone access to other information sources would promote shallower and/or quicker processing.

A short while later, Dennis expressed a different experience and projection:

I think your suppositions about mobile use of resources are unfounded. I hardly ever read email on the laptop anymore. I write all of my longer letters there, but I read most of my email on my iPhone. Why would phone reading and research be any quicker or more cursory than on a large screen? I’ve read whole novels on my phone. Whether I read or look up information on the phone or laptop is more a function of where I am and which tools are available than of the thoroughness with which I intend to study.

In searching for larger-scale studies (at least N > 2) comparing mobile Internet use with use on laptops or desktops, I came across a few reports that may help inform the discussion ... of course, with the caveat that all studies are wrong, but some are useful.

One is the recent Pew Internet report, Mobile Access 2010:

[Image from the Pew Internet Mobile Access 2010 report, www.pewinternet.org]

The Pew study breaks down wireless online access into separate activities, and notes that 34% of cell phone users send or receive email via their phones, up from 25% in 2009. The most recent Pew study I know of that investigated email use - Generations Online in 2009 - showed that 91% of online American adults use email, and 81% do research online (though these numbers vary widely based on age group).

Given that 82% of American adults owned cell phones in 2010, and 34% of them use email on their cell phones, approximately 28% of all adult Americans send or receive email on cell phones. The 2009 Generations Online study reported that 74% of all Americans go online, and 91% of them use email, so as of a year ago, 67% of all adult Americans were sending or receiving email through some device. The 2009 report did not break out the use of different devices, but I would estimate that more than 91% of wireless laptop users use email. The 2009 report also noted that 89% of online adult Americans went online to search for information, but I don't see any breakdown of mobile search in the 2010 report.
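
For those who want to check the arithmetic behind these back-of-the-envelope estimates, here is a minimal sketch in Python, using only the percentages reported in the Pew studies cited above:

    # Back-of-the-envelope estimates derived from the Pew figures quoted above.
    cell_owners = 0.82          # American adults who own a cell phone (Pew, 2010)
    email_on_cell = 0.34        # cell owners who send/receive email on the phone (Pew, 2010)
    online_adults = 0.74        # American adults who go online (Generations Online, 2009)
    online_email_users = 0.91   # online adults who use email (Generations Online, 2009)

    # Share of all adult Americans who use email on a cell phone (~28%)
    print(f"Email on cell phones: {cell_owners * email_on_cell:.1%}")

    # Share of all adult Americans who use email on any device (~67%)
    print(f"Email on any device:  {online_adults * online_email_users:.1%}")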

Maryam Kamvar and her colleagues at Google and Stanford published a peer-reviewed paper at the WWW 2009 conference, Computers and iPhones and Mobile Phones, oh my!: A logs-based comparison of search users on different devices [PDF]. The study was based on over 100,000 English language Google search queries issued by over 10,000 users during a 35-day period in the summer of 2008.

The results suggest that search usage is much more focused for the average mobile user than for the average computer-based user. However, search behavior on high-end phones resembles computer-based search behavior more so than mobile search behavior.

Among their findings were that average query lengths and diversity of query topics were nearly identical between desktop and iPhone users, and both were very different from users of other mobile phones. However, the difference in the average number of searches per session was more evenly distributed across the three platform types (i.e., iPhone users were less like computer users in the number of searches per session than in other measures), leading the researchers to speculate that

Perhaps users on mobile devices are more likely to query topics which have a “quick answer” available ... Taking this hypothesis a step further, we suggest that perhaps users are simply unwilling to explore topics in depth as the barriers to exploration (text entry, network latency) increases.

Another interesting and relevant study, corroborating the Google study though limited to a U.S. sample population, is the Mobile Intent Index by Ruder Finn. Among the findings:

Mobile phones are not a learning tool. Mobile users (76%) are much less likely than all users (92%) to go online to learn. Learning requires time and patience, something mobile phone users are in short supply of.

  • They (64%) are 1.5 times less likely than the traditional user (96%) to go online to educate themselves.
  • They (64%) are 1.4 times less likely than the traditional user (94%) to go online to research.
  • They (95%) are more likely than the traditional user (86%) to go online to keep informed.
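
As best I can tell, the "times less likely" figures are simply the ratio of the traditional-user percentage to the mobile-user percentage; a quick check of that reading:

    # My reading of how the "x times less likely" ratios above are derived:
    # the traditional-user percentage divided by the mobile-user percentage.
    print(round(96 / 64, 2))   # 1.5  -> "1.5 times less likely" to go online to educate themselves
    print(round(94 / 64, 2))   # 1.47 -> reported as "1.4 times less likely" to go online to research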

I'd mentioned this study in a comment on the aforementioned e-Patients post, but could not find information about the data and methods it was based on. Christina Fallon, a Senior VP at Ruder Finn, was kind enough to send me a copy of the executive summary and fill in some of these details:

Ruder Finn’s Mobile Intent Index is the first study of its kind to examine the underlying motivations or reasons – intents – people have for using their mobile phones. The representative and Census-balanced online study of 500 American adults 18 years of age and older who “use their mobile device to go online or to access the Internet” was conducted in November 2009 by RF Insights among respondents who belong to Western Wats’ large consumer panel, Opinion Outpost. The margin of error is +/- 4.4% (95% confidence interval).
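
For what it's worth, the quoted margin of error is consistent with the textbook formula for a simple random sample of 500 at 95% confidence (a sketch; the panel's actual weighting and design may differ):

    # Margin of error for a proportion: z * sqrt(p * (1 - p) / n),
    # with the conservative assumption p = 0.5, z = 1.96 (95% confidence), n = 500.
    import math

    n, p, z = 500, 0.5, 1.96
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"Margin of error: +/- {moe:.1%}")   # -> +/- 4.4%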

The Google and Ruder Finn studies help shed some light on how the use and affordances of different devices affect search behavior with respect to research and learning, but they do not address the effect of devices on email use. Another international study, the Epsilon Global Consumer Email Study 2009, offers some additional insights into this dimension of mobile Internet access. The Epsilon study was conducted in conjunction with ROI Research in April 2009 with over 4,000 consumer respondents in select countries in North America, Asia Pacific and Europe. One of the interesting regional differences it found was that the proportion of people who use PDAs or smartphones for email was more than 3 times higher in the Asia Pacific region (32%) than in North America (9%) or Europe (7%):

[Chart from the Epsilon Global Consumer Email Study 2009]

[I suspect that part of the reason for the discrepancy between the numbers in the Pew study (as of 2009, 25% of 74% = 19%) and the Epsilon study (9%) was the focus on smartphones in the Epsilon study; email is accessible on phones that are not smartphones.]

Another study was referenced in a June 30, 2010 press release reporting on People More Likely To Spend More Time Reading Email On Mobile Devices. The study was based on an analysis of 14 million email messages tracked by the email marketing analysis firm, Litmus. Among the more interesting findings:

  • Users of iPhones or Android-equipped devices are likely to spend 15% more time reading emails than people using Microsoft Outlook
  • Over 50% of email recipients delete messages within two seconds after opening

I spoke with Paul Farnell, the CEO, earlier today, and he told me the data was based on approximately one month's worth of email sent by the firm's clients, so it should be noted that the sample is primarily made up of marketing-oriented email rather than a more general set of messages and message sources.

When we discussed the possible explanations for longer reading times, he agreed that it was more likely due to the small screen and extra scrolling required on mobile devices than a reflection of extra care and attention being devoted to email when read on an iPhone or Android. He did not know the breakdown of what I might call the 2-second rule of email across different devices, but I think it would be very interesting to know more about differences in relative propensity toward action (or inaction) when reading email on mobile vs. laptop or desktop devices. For example, are people more or less likely to click on links, reply to and/or forward email when they read it on a mobile device? Paul said he'd look into this, and I'll post an update if / when I find out more ... but I think I've taken sufficient action on my intent to find out - and share - more about mobile Internet use for one day.

Update, 2010-08-09: Paul followed up with a breakdown of email access and action on mobile vs. desktop computers, based on a larger data set: 54 million email messages sent via the Litmus application. In his data set, 69.87% of email messages were read on a mobile device (e.g., iPhone) vs. only 58.37% of emails read on a desktop computer. Other actions were also somewhat more likely on a mobile device: 0.42% of emails accessed on a mobile device were replied to or forwarded, whereas 0.33% of those accessed via a desktop computer were replied to or forwarded.
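
Relative to each other, those reply/forward rates work out to a bit under a 1.3x difference in favor of mobile; a quick calculation using only the percentages Paul reported:

    # Relative propensity to reply to or forward an email, mobile vs. desktop,
    # using the Litmus percentages quoted above.
    mobile_reply_or_forward = 0.42    # % of emails accessed on a mobile device
    desktop_reply_or_forward = 0.33   # % of emails accessed on a desktop computer
    print(f"{mobile_reply_or_forward / desktop_reply_or_forward:.2f}x")   # -> 1.27x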

Update, 2010-08-02: Nielsen released a report on What Americans Do Online: Social Media and Games Dominate Activity, which shows that email is still the most popular mobile Internet activity. Users spend 41.6% of their mobile Internet time using email, up from 37.4% a year ago, compared with only 8.3% of their total Internet time - across devices - spent on email.

Update, 2010-08-05: Jeff Pierce, manager of the Mobile Computing Research group at IBM Research Almaden, gave a presentation on Triage and Capture: Rethinking Mobile Email at the Web 2.0 Expo SF 2010 in May. I'd earlier written a post on serendipity platforms, unintended consequences and explosive positivity at Web 2.0 Expo, based on some of the Day 1 keynotes I'd watched remotely. Jeff's presentation on Day 3 - a video of which is embedded below - appears to be based on two recent tech reports from IBM Research.

Jeff and his colleagues found that there are substantial differences in the patterns of use of mobile email vs. desktop email; in particular, they observed that "mobile email users primarily triage messages (identifying which to delete, which to handle immediately, and which to defer) and defer handling most messages until they reach a larger computer". They also propose guidelines for - and have prototyped - a new approach to integrating mobile email, desktop email and web services to better capture intended actions and support the "triage" mode in which mobile email is typically processed.


Steve Miller Band at Chateau Ste. Michelle: a concert review

We saw Steve Miller Band in our first concert of the 2010 summer season at Chateau Ste. Michelle on Wednesday night. CSM summer concerts tend to include a preponderance of rock stars from the 60s and 70s, some of whom are looking, sounding and performing better than others, so many decades after their heydays. Steve Miller is definitely one of those who is still faring well after all these years. His voice, guitar playing and showmanship are still going strong, and he had a good band - and a number of special guest stars - to accompany him.

He played one long set, with one encore in which he responded to audience requests. Despite our efforts to get him to play Your Saving Grace, he responded to louder requests for Jungle Love (or, as at least one particularly loud requester was referring to it, "Chug-a-lug"). Surprisingly to me, throughout most of the concert, the keyboardist also provided the bass lines. The backup guitarist played bass guitar on a few songs, but it was mostly keyboards throughout (not that I would have noticed if I hadn't seen the players assembled on stage).

As is so often the case at these oldies but goodies (and not so goodies) concerts at CSM, I think SMB could have provided far more opportunities for audience sing-alongs. He did invite us all to sing the refrain during Space Cowboy, but I'm often surprised at how rarely beloved musicians are willing to engage their long-time fans in a more participatory experience ... especially fans like me who used to play his songs in an amateur rock band years ago. I'm sure the music quality is higher with the professionals performing, but I suspect the people known as the "audience" would welcome more opportunities to share more prominently in the music-making.

Among the highlights of the concert:

  • Gerald Johnson (bass), a former member of the Steve Miller Band, and Randy Hansen (guitar), a local Jimi Hendrix cover artist, joining the band for some cover songs by Eric Clapton, Jimi Hendrix, Slim Harpo and Muddy Waters - shown in the image at the top.
  • Dillon Brown, a young (high-school age) guitarist from the Kids Rock Free program who joined the band for several numbers toward the end - shown in the image at the right.
Here's the set list, as best as I can read my scrawled notes from the concert:
  • Jetliner
  • Take the Money and Run
  • Mercury Blues
  • Hey Yeah
  • Come On (Let the Good Times Roll)
  • Further On Up the Road [Eric Clapton]
  • Ain't No Telling [Jimi Hendrix]
  • Got Love If You Want It [Slim Harpo]
  • I Can't Be Satisfied [Muddy Waters]
  • Shu Ba Da Du Ma Ma Ma Ma
  • Seasons
  • Wild Mountain Honey
  • Dance, Dance, Dance
  • Space Cowboy
  • Abracadabra
  • Ooh Poo Pah Do
  • Tramp
  • Don't Cha Know
  • Serenade
  • Living in the USA
  • Rock'n Me
  • Fly Like an Eagle
  • The Stake
  • Jungle Love

Update, 2010-07-27: Just read about ThingLink's photo tagging service; trying it out on a larger version of the first photo above. Mouse over / click on people in the image to see who's who.


All models, studies and Wikipedia entries are wrong, some are useful

A sequence of encounters with various models, studies and other representations of knowledge lately prompted me to reflect on both the inherent limitations and the potential uses of these knowledge representations ... and the problems that ensue when people don't fully appreciate either their limitations or applications ... or the inherent value of being wrong.

Daniel Hawes, an Economics Ph.D. student at the University of Minnesota, analyzed the Science Secret for Happy Marriages, examining a study correlating the comparative attractiveness of spouses with the happiness of their marriages. He notes that many reports of the "result" - the prettier a wife is in comparison to her husband, the happier the marriage - did not note the homogeneity of the population, particularly the early stage of marriage for most subjects in the study, the lack of control for inter-rater variability in measuring attractiveness and happiness, or the potential influences of variables beyond attractiveness and happiness. These limitations were reported in the original study, but not in subsequent re-reports, leading Hawes to reference a very funny PHD Comics parody of The Science News Cycle and conclude with the rather tongue-in-cheek disclaimer:

This blog post was sponsored by B.A.D.M.S (Bloggers against Data and Methods Sections) in honor of everybody who thinks (science) blogs should limit themselves to reporting correlations (and catchy post titles).

A while later, in a blog post about his Hypertext 2010 keynote on Model-Driven Research in Social Computing, former University of Minnesota Computer Science Ph.D. student and current PARC Area Manager and Principal Scientist Ed Chi offered a taxonomy of models - descriptive, explanatory, predictive, prescriptive and generative - and an iterative 4-step methodology for creating and applying models in social computing research - characterization, modeling, prototyping and evaluation. Most relevant in the context of this post, he riffed on an observation attributed to George Box:

all models are wrong, but some are useful

All models - and studies - represent attempts to condense or simplify data, and any such transformations (or re-presentations) are always going to result in some data loss, and so are inherently wrong. But wrong models can still be useful, even - or perhaps particularly - if they simply serve to spark challenges, debate ... and further research. As an example, Ed notes how Malcolm Gladwell's "influentials theory", in which an elite few act as trend setters, was useful in prompting Duncan Watts and his colleagues to investigate further, and create an alternative model in which the connected many are responsible for trends. More on this evolution of models can be found in Clive Thompson's Fast Company article, Is the Tipping Point Toast?

Over the next few weeks, I encountered numerous other examples of wrongness, limitations, challenges and debate.

My most significant recent encounter with wrongness, limitations and debate was via Susannah Fox, Associate Director at the Pew Internet & American Life Project and a leading voice in the Health 2.0 community, who offered a Health Geek Tip: Abstracts are ads. Read full studies when you can. She describes several examples of medical studies whose titles or abstracts may lead some people - medical experts and non-experts alike - to make incorrect assumptions and draw unwarranted conclusions.

In one case, “a prime example of the problem with some TV physician-'journalists'”, HealthNewsReview.org publisher Gary Schwitzer criticized Dr. Sanjay Gupta's proclamation that an American Society of Clinical Oncology study showed that "adding the drug Avastin to standard chemotherapy 'can slow the spread of [ovarian] cancer pretty dramatically'" as a dramatically unwarranted claim not supported by the study. I won't go into further details about this example, except to note with some irony that I had mentioned Dr. Gupta in my previous post about The "Boopsie Effect": Gender, Sexiness, Intelligence and Competence, in which he had complained that being named one of People Magazine's sexiest men had undermined his credibility ... and it appears that several people quoted in Schwitzer's blog post, as well as in the comments, are questioning Dr. Gupta's credibility, though I don't see any evidence that these doubts are related to his appearance.

My favorite example, even richer in irony, is what Susannah initially referred to as "an intriguing abstract that begs for further study: Accuracy of cancer information on the Internet: A comparison of a Wiki with a professionally maintained database". Another Health 2.0 champion, Gilles Frydman, tweeted a couple of questions about the study, regarding which types of cancers were covered and which version of the professionally maintained database was used. I've posted a considerable amount of cancer information on the Internet myself (a series of 19 blog posts about my wife's anal cancer), and I've long been fascinated with the culture and curation of Wikipedia, so I decided to investigate further.

The original pointer to the abstract came from a Washington Post blog post about "Wikipedia cancer info. passes muster", based on a study that was presented at the American Society of Clinical Oncology (ASCO) annual meeting. The post includes an interview with one of the study's authors, Yaacov Lawrence. I called Dr. Lawrence, and he was kind enough to fill me in on some of the details, which I then shared in a comment on Susannah's post. The study in question was presented as a poster - not a peer-reviewed journal publication - and represents an early, and rather limited, investigation into the comparative accuracy of Wikipedia and the professionally maintained database. At the end of our conversation, I promised to send him some references to other studies of the accuracy of Wikipedia, and suggested that the Health 2.0 community may be a good source of prospective participants in future studies.

But here's the best part: while searching for references, in the Wikipedia entry on the Reliability of Wikipedia, under the section on "Science and medicine peer reviewed data", I found the following paragraph:

In 2010 researchers at Kimmel Cancer Center, Thomas Jefferson University, compared 10 types of cancer to data from the National Cancer Institute's Physician Data Query and concluded "the Wiki resource had similar accuracy and depth to the professionally edited database" and that "sub-analysis comparing common to uncommon cancers demonstrated no difference between the two", but that ease of readability was an issue.

And what is the reference cited for this paragraph? The abstract for the poster presented at the meeting:

Rajagopalan et al. (2010). "Accuracy of cancer information on the Internet: A comparison of a Wiki with a professionally maintained database." Journal of Clinical Oncology 28:7s. http://abstract.asco.org/AbstView_74_41625.html. Retrieved 2010-06-05.

So it appears we have yet another example of a limited study - one that was not peer-reviewed - being used to substantiate a broader claim about the accuracy of Wikipedia articles in the section on "Science and medicine peer reviewed data" ... in a Wikipedia article on the topic of Reliability of Wikipedia. Perhaps someone will eventually edit the entry to clarify the status of the study. In any case, I find this all rather ironic.

As with the other examples of "wrong" models and limited studies, I believe that this study has already been useful in sparking discussion and debate within the Health 2.0 community, and I'm hoping that some of the feedback from the Health 2.0 community - and perhaps other researchers who have more experience in comparative studies of Wikipedia accuracy - will lead to more research in this promising area.

[Update, 2010-09-01: I just read and highly recommend a relevant and somewhat irreverent article by Dave Mosher, The Ten Commandments of Science Journalism.]

[Update, 2011-03-16: I just read and highly recommend another relevant article on wrongness and medicine: Lies, Damned Lies, and Medical Science, by David H. Freedman in the November 2010 edition of The Atlantic.]

[Update, 2011-04-21: Another relevant and disturbing post: Lies, Damn Lies and Pharma Social Media Statistics on Dose of Digital by Jonathan Richman.]

[Update, 2012-02-23: John P. A. Ioannidis offers an explanation for Why Most Published Research Findings Are False in a 2005 PLoS Medicine article.]


Paro, Personal Robots, Emotional Intelligence and the Need to be Needed

Paro is a personal robot that looks like a baby harp seal and responds to changes in light, sound, temperature and touch. Research and development in artificial intelligence has traditionally focused on linguistic, logical or mathematical intelligence, although robotics has also involved the quest for imbuing machines with spatial and kinesthetic intelligence. Paro, however, seems designed more to embody emotional intelligence. And I would argue that the secret superpower of Paro is its ability to evoke emotions and address our fundamental need to be needed.

I recently heard an NPR interview with Amy Harmon about her NY Times article on Paro, A Soft Spot for Circuitry, in which she recounted a number of interesting responses from Millie Lesek, an elderly woman who cared for Paro in a nursing home. Paro was first presented to Mrs. Lesek by a staff member who said she needed someone to babysit the robotic pet. Mrs. Lesek was happy to fulfill that need, forming a special bond with Paro, and eventually developed a stronger sense of being needed: “I’m the only one who can put him to sleep”.

Some researchers referenced in the article express concern about substituting a robot for a person - or a real pet - in relationships. However, staff members at another retirement home noted that Paro tends to facilitate human interactions when other people are present (the way that real pets often do) rather than replace them. And when family, friends or staff are not - or cannot be - present, using a machine to evoke emotional responses and elicit a feeling of being needed seems like a powerful therapeutic tool ... one that requires far less care and feeding than real pets.

I've long nursed a pet theory that the primary therapeutic benefit people derive from pets is not so much that our pets love us, but that we can love our pets ... with far less fear of the rejection we risk in loving other people. That is, it's the expression or giving of love rather than the receiving of love that really opens up the heart - and promotes other emotional and physiological benefits. Reading the article about Paro, I'm inclined to revisit and revise this theory. Perhaps it's not just loving someone - human, animal or robot - that makes us feel complete; it is the [perception of] being needed by someone we love that helps us feel like we matter ... like our life has purpose.

I remember reading about a study several years ago - one that I cannot track down at the moment - in which people who were asked to do a favor tended to express closer feelings toward the person asking for the favor than the requester expressed toward the person granting it. That is, I'm more likely to feel closer to you if I feel you need me.

The band Cheap Trick was also on to this back in 1977, with a somewhat less scholarly expression of this basic need:

I want you to want me.
I need you to need me.
I'd love you to love me.
I'm beggin' you to beg me.

Returning to a more scholarly thread, the Turing Test was proposed in 1950 as a way to determine whether a machine was intelligent. The idea was to have a human interrogator in one room, and a computer and another human in another room, communicating only via written or typed questions and answers. If the interrogator was unable to differentiate which respondent was the machine and which was the human, the machine would be said to have achieved human-level artificial intelligence.

Paro, however, represents an attempt to achieve embodied intelligence, which could not, by definition, be tested using a scheme in which the machine is isolated from the human interrogator. It is clear from Harmon's article that many of the people who interact with Paro do not believe that Paro is a real baby seal, and I don't know whether they would ascribe much logical intelligence to it, but I do suspect, based on the responses elicited from many of those who interact with it, that it is well on its way to demonstrating significant emotional intelligence.