By
http://apis.shorturl.com
“Many beekeepers see scientists as employed to solve applied problems and publish the results in accessible trade journals. They often have little patience for research published in scientific journals, especially that which they perceive has little practical value. A good many researchers, on the other hand, see beekeepers as supplying little, if any, funding. As a consequence, they have little patience for what they often view as complaints by a cadre of folks who are not informed about what really is involved in bee research.
Unfortunately, this conflict sometimes leads to beekeepers becoming fed up with researchers, and vice versa. In the worst-case scenario beekeepers may accuse researchers of complacency, even complicity, in ignoring their needs. At the same time scientists can lose respect for beekeepers, who they perceive as ungrateful for research even when it does directly affect their livelihood.
“At the convention, several conclusions were reached. Quality research isn't easy. It takes patience, time, money and adequate controls. In 1985, I wrote an essay in these pages (May 1985) about the latter issue with reference to tracheal mite studies. In part it read, ‘…no experiment is worth much without a control, an untreated colony in the exact same state genetically, qualitatively (same stores, amount of brood) and infested to the same degree as the colony being treated. This provides the basis for comparison to show a material's effectiveness. In bee research, developing effective control colonies is often the most difficult part of an experiment. This is because to be shown to be generally effective, experiments must usually be conducted on a large scale involving a great number of both infested and control colonies.’”2
This brings to mind a recent flap that has mostly escaped the readers of Bee Culture who are not part of the enlarged online beekeeping community that routinely uses electronic communications. In my electronic Apis Newsletter at the Yahoo.com site, I wrote the following about the September 2005 edition of Bee Culture:
“Hans-Otto Johnsen discusses commercial beekeeping in Norway.”
Indeed, publication of this study, which was also printed in The Beekeepers Quarterly, a British magazine, resulted in several strong responses. Some in the Bee-L community saw the article as vindicating arguments that small cell size should be further investigated as a means to control Varroa and other bee maladies. When I inquired further about this article from some of those who collaborated on the project, I received the following: “Illegal publication of test results!...It is important for the Norwegian Beekeepers Association to point out that the test is not finished, that the results in the mentioned articles (sic) is taken out of a larger context, and that Johnsen has published some of the preliminary results without the approval of the Norwegian Beekeepers Association.”
Soon after I wrote that, a rebuttal came from Mr. Johnsen, which I chose to publish in my November issue: “My article is about my surviving as a truly organic beekeeper. In the concept for me surviving with my 600 colonies, small cell size is a vital part and the figures are mentioned to give the background for why small cell size is important in my concept. The mentioned figures are results from hives which I've got with the design described.” It is important to note that this quote did not come directly from Mr. Johnsen, who apparently does not use a computer and so did not see the original online postings, but indirectly from two other persons who reportedly got feedback through personal communication with him.
This provoked other replies concerning the validity of the information reported. The discussion can be gleaned from the Web and is not the focus of this article. However, Jim Fischer, whose words have graced Bee Culture in the past, concluded in one of his replies: “…if one is participating in an organized research effort, it is generally assumed that one will follow a specific protocol, contribute one's data, and let all the data be analyzed before making any possibly rash statements about what is seen in a mere subset of the data.
“It is a shame that the actual paper may be blocked from being published in a peer-reviewed journal due to this ‘pre-publication’ of partial data by one ‘loose cannon’ among the large number of people who participated in the effort. Peer-reviewed science journals most often flatly refuse to publish research that has already been reported on by the popular (layman's) press or another journal before being published in their journal.
“The net result may be to take hard work by many people resulting in good hard data, and make it all seem ‘questionable’ or ‘unpublishable’ simply due to this error in judgment by one participant. That's a shame when the goal seems to have been to do a large-scale study and have the results be accepted as ‘Science’ with a capital ‘S’.
“So, it is not about ‘freedom of speech’, it’s not about ‘turf’, it’s not about ‘ego’, and it’s certainly not about what any one participant THINKS he might be able to conclude from his hives alone. It is about doing science, working as a member of a team, and refraining from grandstanding to get one's 15 minutes of fame. This is expected in any multi-researcher effort. Violate these basic rules, and no one will ever want to work with you again in this lifetime.
“That didn't happen. Two magazines got conned, and so did an entire national beekeeping group. That's sad.” In addition, responding to another statement, he said, “I think it was made clear that it was all about statistical significance.”4
Since the above discussion, several items have come to my attention regarding the scientific publication process. An article appearing in The Economist took on the topic of scientific accuracy and its relation to statistics.5
“Theodore Sturgeon, an American science-fiction writer, once observed that ’95% of everything is crap’. John Ioannidis, a Greek epidemiologist, would not go that far. His benchmark is 50%. But that figure, he thinks, is a fair estimate of the proportion of scientific papers that eventually turn out to be wrong.
“Dr. Ioannidis, who works at the
A major source of error is an “unsophisticated” reliance on “statistical significance,” according to the article, which says: “To qualify as statistically significant a result has, by convention, to have odds longer than one in twenty of being the result of chance. But, as Dr. Ioannidis points out, adhering to this standard means that simply examining 20 different hypotheses at random is likely to give you one statistically significant result. In fields where thousands of details have to be examined…many seemingly meaningful results are bound to be wrong just by chance.”
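The arithmetic behind that point is easy to verify. Here is a minimal sketch in Python; the 20-hypothesis count and the one-in-twenty threshold come from the article itself, while the calculation is just standard probability, not part of Dr. Ioannidis's model:

```python
# Probability arithmetic behind the "20 hypotheses" point.
ALPHA = 0.05       # conventional significance threshold: 1 in 20
HYPOTHESES = 20    # hypotheses examined, all assumed truly null

# Expected number of "significant" results arising purely by chance:
expected_false_positives = HYPOTHESES * ALPHA          # = 1.0

# Probability that at least one of the 20 tests comes up "significant":
p_at_least_one = 1 - (1 - ALPHA) ** HYPOTHESES         # ≈ 0.64

print(expected_false_positives)
print(round(p_at_least_one, 2))
```

In other words, even a study in which no real effects exist will, on average, hand the experimenter one “statistically significant” finding.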
Another problem many in bee research can relate to is small sample size. The greater the number of colonies to which experimental treatments are applied, the better the resulting information will be. However, the more colonies one includes in a study, the more difficult and expensive it becomes. There are also more insidious sources of error listed by Dr. Ioannidis, which can equally affect beekeeper-initiated research. These include studies showing a “weak effect,” such as a drug that works on only a small number of patients (bee colonies), and poorly designed research that allows fishing for results beneficial to commercial interests (pesticide manufacturers) or that confirm pet theories.
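The sample-size trade-off can be made concrete with a rough Monte Carlo power calculation. Everything below is hypothetical: the effect size, the colony-to-colony noise, and the simple one-sided z-test are illustrative stand-ins, not numbers from any actual bee study:

```python
import random

random.seed(1)

def estimated_power(n_colonies, effect=0.5, sd=1.0, trials=5000):
    """Fraction of simulated experiments in which a real treatment
    effect stands out from colony-to-colony noise (one-sided z-test,
    alpha = 0.05). All parameter values here are hypothetical."""
    se = (2 * sd ** 2 / n_colonies) ** 0.5  # std. error of the mean difference
    hits = 0
    for _ in range(trials):
        treated = sum(random.gauss(effect, sd) for _ in range(n_colonies)) / n_colonies
        control = sum(random.gauss(0.0, sd) for _ in range(n_colonies)) / n_colonies
        if treated - control > 1.645 * se:  # critical value for alpha = 0.05
            hits += 1
    return hits / trials

print(estimated_power(10))   # roughly 0.3: most real effects are missed
print(estimated_power(50))   # roughly 0.8: the same effect is usually found
```

With ten treated and ten control colonies the (simulated) real effect is missed most of the time; with fifty of each it is usually detected. That is exactly the tension between better information and greater cost.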
According to The Economist article, “when Dr. Ioannidis ran the numbers through his model, he concluded that even a large, well-designed study with little researcher bias has only an 85% chance of being right. An underpowered, poorly performed study has but a 17% chance of producing true conclusions. Overall, more than half of all published research is probably wrong.” The article concludes: “…he (Dr. Ioannidis) makes a good point—and one that lay readers of scientific results, including those reported in this newspaper, would do well to bear in mind. Which leaves just one question: is there a less than even chance that Dr. Iaonnidis’s (sic) paper itself is wrong?”
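The flavor of the model behind those percentages can be sketched with the standard positive-predictive-value calculation (true positives as a share of all positive results). The prior odds and power values below are hypothetical, and Dr. Ioannidis's actual model includes additional terms for researcher bias, so this sketch does not reproduce his exact 85% and 17% figures:

```python
def share_of_true_findings(prior_odds, power, alpha=0.05):
    """Share of 'statistically significant' findings that are real.

    prior_odds: odds that a tested hypothesis is true before the study
    power:      probability a real effect is detected (1 - beta)
    alpha:      false-positive rate of the significance test
    """
    true_positives = power * prior_odds
    false_positives = alpha  # per unit of false hypotheses tested
    return true_positives / (true_positives + false_positives)

# Large, well-designed study of a plausible hypothesis (hypothetical inputs):
print(round(share_of_true_findings(prior_odds=0.5, power=0.8), 2))   # 0.89
# Underpowered study fishing among long-shot hypotheses:
print(round(share_of_true_findings(prior_odds=0.1, power=0.2), 2))   # 0.29
```

The general lesson survives the simplification: a finding's chance of being true depends not just on the significance test, but on the study's power and on how plausible the hypothesis was to begin with.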
Another article in The Economist discusses the future of scientific publishing: “All this could change the traditional form of the peer-review process, at least for the publication of papers. The process is organized by the publisher but conducted, for free, by scholars. The advantages afforded by the internet mean that primary data is becoming available freely online. Indeed, quite often the online paper has a direct link to it. This means that reported findings are more readily replicable and checkable by other teams of researchers. Moreover online publication offers the opportunity for others to comment on the research. Research is also becoming more collaborative so that, before they have been finalized, papers have been reviewed by several authors.”6
Finally, it must be kept in mind that many times the details of publications are not fully examined by readers. The Devil is in the details when it comes to analyzing research, as noted by Richard Lewontin relating the story of the wonder-rabbi of Chelm, who had a vision of the fiery destruction of a school in the city of
References: