
All models, studies and Wikipedia entries are wrong, some are useful

A sequence of recent encounters with various models, studies and other representations of knowledge prompted me to reflect on both the inherent limitations and the potential uses of these knowledge representations ... and on the problems that ensue when people don't fully appreciate either those limitations or those applications ... or the inherent value of being wrong.

Daniel Hawes, an Economics Ph.D. student at the University of Minnesota, analyzed the Science Secret for Happy Marriages, examining a study correlating the comparative attractiveness of spouses with the happiness of their marriages. He points out that many reports of the "result" - the prettier a wife in comparison to her husband, the happier the marriage - failed to mention the homogeneity of the population (particularly the early stage of marriage for most subjects in the study), the lack of control for inter-rater variability in measuring attractiveness and happiness, or the potential influence of variables beyond attractiveness and happiness. These limitations were reported in the original study, but not in subsequent re-reports, leading Hawes to reference a very funny PHD Comics parody of The Science News Cycle and to conclude with a rather tongue-in-cheek disclaimer:

This blog post was sponsored by B.A.D.M.S (Bloggers against Data and Methods Sections) in honor of everybody who thinks (science) blogs should limit themselves to reporting correlations (and catchy post titles).
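
To make the inter-rater variability concern more concrete: Cohen's kappa is one standard way to measure how much two raters agree beyond what chance alone would produce. Here's a minimal sketch - entirely my own toy example, with made-up ratings rather than data from the study - where a kappa well below 1.0 suggests the "attractiveness" measure itself is noisy:

```python
# Toy illustration (not from the study): Cohen's kappa for two raters
# scoring the same spouses' attractiveness on a 1-5 scale.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category
    # if each rated independently at their own marginal rates.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

rater_a = [3, 4, 2, 5, 3, 4, 1, 3, 2, 4]  # hypothetical ratings
rater_b = [4, 4, 3, 3, 3, 5, 2, 3, 2, 3]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.19, far below 1.0
```

If raters disagree this much about who is attractive, a headline built on "comparative attractiveness" is standing on soft ground.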

A while later, in a blog post about his Hypertext 2010 keynote on Model-Driven Research in Social Computing, former University of Minnesota Computer Science Ph.D. student and current PARC Area Manager and Principal Scientist Ed Chi offered a taxonomy of models - descriptive, explanatory, predictive, prescriptive and generative - and an iterative 4-step methodology for creating and applying models in social computing research - characterization, modeling, prototyping and evaluation. Most relevant in the context of this post, he riffed on an observation attributed to George Box:

all models are wrong, but some are useful

All models - and studies - represent attempts to condense or simplify data, and any such transformations (or re-presentations) are always going to result in some data loss, and so are inherently wrong. But wrong models can still be useful, even - or perhaps particularly - if they simply serve to spark challenges, debate ... and further research. As an example, Ed notes how Malcolm Gladwell's "influentials theory", in which an elite few act as trend setters, was useful in prompting Duncan Watts and his colleagues to investigate further, and create an alternative model in which the connected many are responsible for trends. More on this evolution of models can be found in Clive Thompson's Fast Company article, Is the Tipping Point Toast?
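
For a more concrete feel of that debate, here's a toy cascade simulation - my own sketch, emphatically not Watts's actual model - that seeds an adoption cascade either at the best-connected node (the "influential") or at a random node, and compares average cascade sizes:

```python
# Toy independent-cascade simulation on a random graph (my own sketch).
import random

def random_graph(n, avg_degree):
    """Undirected Erdos-Renyi-style graph as an adjacency dict."""
    p = avg_degree / (n - 1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def cascade_size(adj, seed, p_adopt=0.2):
    """Each new adopter gets one chance to convert each neighbor
    with probability p_adopt; returns total adopters."""
    adopted, frontier = {seed}, [seed]
    while frontier:
        node = frontier.pop()
        for nb in adj[node]:
            if nb not in adopted and random.random() < p_adopt:
                adopted.add(nb)
                frontier.append(nb)
    return len(adopted)

random.seed(42)
adj = random_graph(500, avg_degree=6)
hub = max(adj, key=lambda v: len(adj[v]))  # the "influential"
runs = 200
hub_avg = sum(cascade_size(adj, hub) for _ in range(runs)) / runs
rand_avg = sum(cascade_size(adj, random.choice(list(adj)))
               for _ in range(runs)) / runs
print(f"seeded at hub: {hub_avg:.1f}  seeded at random: {rand_avg:.1f}")
```

In toy settings like this, the hub typically enjoys some advantage, but cascade size is driven mostly by the structure of the network as a whole - roughly the intuition behind the "connected many" alternative.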

Over the next few weeks, I encountered numerous other examples of wrongness, limitations, challenges and debate:

My most significant recent encounter with wrongness, limitations and debate was via Susannah Fox, Associate Director at the Pew Internet & American Life Project and a leading voice in the Health 2.0 community, who offered a Health Geek Tip: Abstracts are ads. Read full studies when you can. She describes several examples of medical studies whose titles or abstracts may lead some people - medical experts and non-experts alike - to make incorrect assumptions and draw unwarranted conclusions.

In one case, HealthNewsReview.org publisher Gary Schwitzer called out “a prime example of the problem with some TV physician-'journalists'”: Dr. Sanjay Gupta's proclamation that an American Society of Clinical Oncology study showed that adding the drug Avastin to standard chemotherapy "can slow the spread of [ovarian] cancer pretty dramatically" - a dramatically unwarranted claim, in Schwitzer's view, that the study did not support. I won't go into further details about this example, except to note with some irony that I had mentioned Dr. Gupta in my previous post about The "Boopsie Effect": Gender, Sexiness, Intelligence and Competence, in which he had complained that being named one of People Magazine's sexiest men had undermined his credibility ... and it appears that several people quoted in Schwitzer's blog post, as well as in the comments, are questioning Dr. Gupta's credibility, though I don't see any evidence that these doubts are related to his appearance.

My favorite example, even richer in irony, is what Susannah initially referred to as "an intriguing abstract that begs for further study: Accuracy of cancer information on the Internet: A comparison of a Wiki with a professionally maintained database". Another Health 2.0 champion, Gilles Frydman, tweeted a couple of questions about the study, regarding which types of cancers were covered and which version of the professionally maintained database was used. I've posted a considerable amount of cancer information on the Internet myself (a series of 19 blog posts about my wife's anal cancer), and I've long been fascinated with the culture and curation of Wikipedia, so I decided to investigate further.

ASCO The original pointer to the abstract came from a Washington Post blog post about "Wikipedia cancer info. passes muster", based on a study that was presented at the American Society of Clinical Oncology (ASCO). The post includes an interview with one of the study's authors, Yaacov Lawrence. I called Dr. Lawrence, and he was kind enough to fill me in on some of the details, which I then shared in a comment on Susannah's post. The study in question was presented as a poster - not a peer-reviewed journal publication - and represents an early, and rather limited, investigation into the comparative accuracy of Wikipedia and the professionally maintained database. At the end of our conversation, I promised to send him some references to other studies of the accuracy of Wikipedia, and suggested that the Health 2.0 community may be a good source of prospective participants in future studies.
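
To give a rough sense of what such a comparison involves, here's a minimal sketch of how one might score "accuracy" and "depth" against a reference database - entirely my own illustration, with made-up data; the study's actual protocol was more involved than this:

```python
# My own toy scoring sketch (hypothetical data, not the study's method):
# accuracy = fraction of the article's assertions the reference supports;
# depth = fraction of the reference's facts the article covers.
reference_facts = {          # hypothetical curated facts for one cancer
    "risk_factor:smoking",
    "risk_factor:hpv",
    "treatment:chemoradiation",
    "treatment:surgery",
    "staging:tnm",
}
article_assertions = {       # hypothetical assertions from a wiki page
    "risk_factor:smoking",
    "risk_factor:hpv",
    "treatment:chemoradiation",
    "treatment:herbal_tea",  # unsupported claim -> counts against accuracy
}

supported = article_assertions & reference_facts
accuracy = len(supported) / len(article_assertions)
depth = len(supported) / len(reference_facts)
print(f"accuracy: {accuracy:.0%}, depth: {depth:.0%}")  # accuracy: 75%, depth: 60%
```

Even this toy version makes clear how much hinges on judgment calls - what counts as an "assertion", and which reference version you compare against - which is exactly what Gilles Frydman's questions were probing.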

But here's the best part: while searching for references, in the Wikipedia entry on the Reliability of Wikipedia, under the section on "Science and medicine peer reviewed data", I found the following paragraph:

In 2010 researchers at Kimmel Cancer Center, Thomas Jefferson University, compared 10 types of cancer to data from the National Cancer Institute's Physician Data Query and concluded "the Wiki resource had similar accuracy and depth to the professionally edited database" and that "sub-analysis comparing common to uncommon cancers demonstrated no difference between the two", but that ease of readability was an issue.

And what is the reference cited for this paragraph? The abstract for the poster presented at the meeting:

Rajagopalan et al. (2010). "Accuracy of cancer information on the Internet: A comparison of a Wiki with a professionally maintained database." Journal of Clinical Oncology 28:7s, 2010. http://abstract.asco.org/AbstView_74_41625.html. Retrieved 2010-06-05.

So it appears we have yet another example of a limited study - one that was not peer-reviewed - being used to substantiate a broader claim about the accuracy of Wikipedia articles, under the section on "Science and medicine peer reviewed data" ... in a Wikipedia article on the topic of Reliability of Wikipedia. Perhaps someone will eventually edit the entry to clarify the status of the study. In any case, I find this all rather ironic.

As with the other examples of "wrong" models and limited studies, I believe that this study has already been useful in sparking discussion and debate within the Health 2.0 community, and I'm hoping that some of the feedback from that community - and perhaps from other researchers who have more experience in comparative studies of Wikipedia accuracy - will lead to more research in this promising area.

[Update, 2010-09-01: I just read and highly recommend a relevant and somewhat irreverent article by Dave Mosher, The Ten Commandments of Science Journalism.]

[Update, 2011-03-16: I just read and highly recommend another relevant article on wrongness and medicine: Lies, Damned Lies, and Medical Science, by David H. Freedman in the November 2010 edition of The Atlantic.]

[Update, 2011-04-21: Another relevant and disturbing post: Lies, Damn Lies and Pharma Social Media Statistics on Dose of Digital by Jonathan Richman.]

[Update, 2012-02-23: John P. A. Ioannidis offers an explanation for Why Most Published Research Findings Are False in a 2005 PLoS Medicine article.]
