The State of Bee Science
Bee Culture (January 2006) Vol. 134 (1): 23-25.


Malcolm T. Sanford

I have just returned from the 41st Reunión Nacional de Investigación Pecuaria in Cuernavaca, México. It was jointly sponsored by several entities, including the Universidad Autónoma del Estado de Morelos and the Instituto Nacional de Investigaciones Forestales, Agrícolas y Pecuarias. Both organizations have a long history of collaborating in animal research. A wide range of scientific papers was presented over four days on the management of beef and dairy cattle, goats, sheep, pigs and honey bees. As I wandered the halls attending the talks and perusing the posters, I wondered where animal producers would be if there had been no organized scientific study over the last four decades.

To a person with my training the answer is obvious, but for some of the producers it seemed that all the scientific study simply provided more questions than answers. I wrote an article on how bee science is often viewed by researchers and beekeepers in the March 1998 issue of Apis,1 a newsletter that I authored during my tenure at the University of Florida and that can still be accessed on the World Wide Web:

“A panel on bee research was convened at the recent meeting of the American Beekeeping Federation. Billed as what bee researchers want from beekeepers and vice versa, presentations from both sides showed that a substantial divide exists between these groups. Researchers are primarily driven by the demands of their discipline and administrators. The latter often require that scientists themselves acquire the substantial funding needed to carry out their activities from granting agencies or commercial sources. As for the former, researchers are called on to publish in journals that are peer-reviewed and read by others in their field. They get little if any credit for publishing in lay magazines. The practical result of this is that a lot of research is not perceived as directly helping beekeepers. In addition, much of it continues to be published in places not readily accessible to the lay public.

“Many beekeepers see scientists as employed to solve applied problems and publish the results in accessible trade journals.  They often have little patience for research published in scientific journals, especially that which they perceive has little practical value. A good many researchers, on the other hand, see beekeepers as supplying little, if any, funding. As a consequence, they have little patience for what they often view as complaints by a cadre of folks who are not informed about what really is involved in bee research.

“Unfortunately, this conflict sometimes leads to beekeepers becoming fed up with researchers, and vice versa. In the worst-case scenario, beekeepers may accuse researchers of complacency, even complicity, in ignoring their needs. At the same time, scientists can lose respect for beekeepers, whom they perceive as ungrateful for research even when it does directly affect their livelihood.


“At the convention, several conclusions were reached. Quality research isn't easy. It takes patience, time, money and adequate controls. I wrote an essay in these pages (May 1985) about the latter issue with reference to tracheal mite studies. In part it read, ‘…no experiment is worth much without a control, an untreated colony in the exact same state genetically, qualitatively (same stores, amount of brood) and infested to the same degree as the colony being treated. This provides the basis for comparison to show a material's effectiveness. In bee research, developing effective control colonies is often the most difficult part of an experiment. This is because to be shown to be generally effective, experiments must usually be conducted on a large scale involving a great number of both infested and control colonies.’”2


This brings to mind a recent flap that has mostly escaped readers of Bee Culture who are not part of the larger online beekeeping community that routinely uses electronic communications. In my electronic Apis newsletter at the site, I wrote the following about the September 2005 edition of Bee Culture:


“Hans-Otto Johnsen discusses commercial beekeeping in Norway. He describes Varroa control in the country employing artificial swarms and splits, breeding and biotechnical methods. Much of the article discusses an experiment in small-cell beekeeping. Discussion of this on the bee-l list revealed distinct differences in how those who read Bee Culture’s pages deal with the information presented. Even though several have complained about the methods used, others seemed to care more that the information was published so they themselves could determine what others are doing and do the due diligence on the study’s validity themselves.” 3


Indeed, publication of this study, also printed in The Beekeepers Quarterly, a British magazine, resulted in several strong responses. Some in the bee-l community saw the article as vindicating arguments that small cell size should be further investigated as a means to control Varroa and other bee maladies. When I inquired further about this article from some of those who collaborated on the project, I received the following: “Illegal publication of test results!...It is important for the Norwegian Beekeepers Association to point out that the test is not finished, that the results in the mentioned articles (sic) is taken out of a larger context, and that Johnsen has published some of the preliminary results without the approval of the Norwegian Beekeepers Association.”


Soon after I wrote that, a rebuttal came from Mr. Johnsen, which I chose to publish in my November issue: “My article is about my surviving as a truly organic beekeeper. In the concept for me surviving with my 600 colonies, small cell size is a vital part and the figures are mentioned to give the background for why small cell size is important in my concept. The mentioned figures are results from hives which I've got with the design described.” It is important to note that this quote did not come directly from Mr. Johnsen, who apparently does not use a computer and so did not see the original online postings, but indirectly from two other persons who reportedly got feedback through personal communication with him.


This provoked other replies concerning the validity of the information reported. The discussion can be gleaned from the Web and is not the focus of this article. However, Jim Fischer, whose words have graced Bee Culture in the past, concluded in one of his replies: “…if one is participating in an organized research effort, it is generally assumed that one will follow a specific protocol, contribute one's data, and let all the data be analyzed before making any possibly rash statements about what is seen in a mere subset of the data.


“It is a shame that the actual paper may be blocked from being published in a peer-reviewed journal due to this ‘pre-publication’ of partial data by one ‘loose cannon’ among the large number of people who participated in the effort. Peer-reviewed science journals most often flatly refuse to publish research that has already been reported on by the popular (layman's) press or another journal before being published in their journal.


“The net result may be to take hard work by many people resulting in good hard data, and make it all seem ‘questionable’ or ‘unpublishable’ simply due to this error in judgment by one participant. That's a shame when the goal seems to have been to do a large-scale study and have the results be accepted as ‘Science’ with a capital ‘S’.


“So, it is not about ‘freedom of speech’, it’s not about ‘turf’, it’s not about ‘ego’, and it’s certainly not about what any one participant THINKS he might be able to conclude from his hives alone. It is about doing science, working as a member of a team, and refraining from grandstanding to get one's 15 minutes of fame. This is expected in any multi-researcher effort. Violate these basic rules, and no one will ever want to work with you again in this lifetime.


“That didn't happen. Two magazines got conned, and so did an entire national beekeeping group. That's sad.” In addition, responding to another statement, he said, “I think it was made clear that it was all about statistical significance.”4


Since the above discussion, several items have come to my attention regarding the scientific publication process.  An article appearing in The Economist took on the topic of scientific accuracy and its relation to statistics.5


“Theodore Sturgeon, an American science-fiction writer, once observed that ‘95% of everything is crap’. John Ioannidis, a Greek epidemiologist, would not go that far. His benchmark is 50%. But that figure, he thinks, is a fair estimate of the proportion of scientific papers that eventually turn out to be wrong.


“Dr. Ioannidis, who works at the University of Ioannina, in northern Greece, makes his claim in PLoS Medicine, an on-line journal published by the Public Library of Science. His thesis that many scientific papers come to false conclusions is not new. Science is a Darwinian process that proceeds as much by refutation as by publication. But until recently no one has tried to quantify the matter.” Among the studies cited in Dr. Ioannidis’s work now found to be wrong, according to The Economist article, are the safety of hormone replacement therapy, coronary health improvement due to vitamin E intake, and the relative effectiveness of stents over balloon angioplasty in coronary artery repair.


A major source of error is an “unsophisticated” reliance on “statistical significance,” according to the article, which says:  “To qualify as statistically significant a result has, by convention, to have odds longer than one in twenty of being the result of chance.  But, as Dr. Ioannidis points out, adhering to this standard means that simply examining 20 different hypotheses at random is likely to give you one statistically significant result.  In fields where thousands of details have to be examined…many seemingly meaningful results are bound to be wrong just by chance.”
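The Economist's point about 20 hypotheses can be checked with a short simulation: under a true null hypothesis, a p-value is uniformly distributed between 0 and 1, so each test has a one-in-twenty chance of looking "significant" by accident. The sketch below is illustrative only; the run counts and seed are arbitrary choices, not figures from the article.

```python
import random

# Under a true null hypothesis, p-values are uniform on [0, 1], so each of
# 20 independent tests has a 5% chance of crossing the p < 0.05 threshold.
random.seed(42)

N_RUNS = 10_000   # simulated "studies", each testing 20 null hypotheses
N_TESTS = 20
ALPHA = 0.05

runs_with_false_positive = 0
for _ in range(N_RUNS):
    p_values = [random.random() for _ in range(N_TESTS)]
    if any(p < ALPHA for p in p_values):
        runs_with_false_positive += 1

observed = runs_with_false_positive / N_RUNS
expected = 1 - (1 - ALPHA) ** N_TESTS  # roughly two chances in three

print(f"Expected chance of at least one spurious 'significant' result: {expected:.2f}")
print(f"Observed in simulation: {observed:.2f}")
```

In other words, a study that quietly tests twenty unrelated hypotheses will stumble on a "statistically significant" finding about two times in three even when nothing real is there.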


Another problem many in bee research can relate to is small sample size. The greater the number of colonies to which experimental treatments are applied, the better the resulting information will be. However, the more colonies one includes in a study, the more difficult and expensive it becomes. There are also more insidious sources of error listed by Dr. Ioannidis, which can equally affect beekeeper-initiated research. These include studies showing a “weak effect,” such as a drug that works on only a small number of patients (or bee colonies), and poorly designed research that allows fishing for results beneficial to commercial interests (pesticide manufacturers, for example) or that confirm pet theories.
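The sample-size problem can also be made concrete with a hypothetical simulation. Everything below is an illustrative assumption, not data from any real colony trial: a treatment effect of half a standard deviation, a simple known-variance z-test, and group sizes of 10 versus 100 colonies. The point is only that the same modest effect is usually missed with few colonies and reliably detected with many.

```python
import math
import random

# Toy z-test with known spread: compare treated vs. control colony means.
# The half-SD treatment effect and group sizes are illustrative assumptions.
random.seed(7)

EFFECT = 0.5    # true difference between treated and control means (in SDs)
Z_CRIT = 1.96   # two-sided 5% significance threshold
N_SIMS = 2_000  # simulated trials per group size

def detection_rate(n_colonies: int) -> float:
    """Fraction of simulated trials in which the true effect reaches significance."""
    hits = 0
    for _ in range(N_SIMS):
        treated = [random.gauss(EFFECT, 1.0) for _ in range(n_colonies)]
        control = [random.gauss(0.0, 1.0) for _ in range(n_colonies)]
        diff = sum(treated) / n_colonies - sum(control) / n_colonies
        se = math.sqrt(2.0 / n_colonies)  # standard error, unit variance per group
        if abs(diff) / se > Z_CRIT:
            hits += 1
    return hits / N_SIMS

small = detection_rate(10)   # a 10-colony-per-group trial
large = detection_rate(100)  # the same question with 100 colonies per group

print(f"Detection rate with 10 colonies per group:  {small:.2f}")
print(f"Detection rate with 100 colonies per group: {large:.2f}")
```

Under these assumptions the small trial finds the real effect only about one time in five, which is exactly the "underpowered study" trap described above.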


According to The Economist article, “when Dr. Ioannidis ran the numbers through his model, he concluded that even a large, well-designed study with little researcher bias has only an 85% chance of being right. An underpowered, poorly performed study has but a 17% chance of producing true conclusions. Overall, more than half of all published research is probably wrong.” The article concludes: “…he (Dr. Ioannidis) makes a good point—and one that lay readers of scientific results, including those reported in this newspaper, would do well to bear in mind. Which leaves just one question: is there a less than even chance that Dr. Iaonnidis’s (sic) paper itself is wrong?”
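For readers curious how such numbers arise, the core of Ioannidis's argument is the positive predictive value of a significant result: the chance a claimed finding is actually true, given the study's power, the significance threshold, and the prior odds that the hypothesis was right in the first place. The sketch below uses the standard bias-free form of that calculation with illustrative inputs; it deliberately omits the researcher-bias term his full model includes, so it will not reproduce the 85% and 17% figures quoted above.

```python
# Positive predictive value of a "significant" finding, without a bias term.
# Inputs are illustrative assumptions, not figures from Ioannidis's paper.

def ppv(power: float, alpha: float, prior_odds: float) -> float:
    """P(finding is true | test was significant).

    prior_odds: ratio of true to false hypotheses being tested in the field.
    """
    true_positives = power * prior_odds   # real effects that get detected
    false_positives = alpha * 1.0         # each false hypothesis passes with prob alpha
    return true_positives / (true_positives + false_positives)

# A well-powered study of a plausible hypothesis (even odds it is true):
strong = ppv(power=0.80, alpha=0.05, prior_odds=1.0)

# An underpowered study fishing among long-shot hypotheses:
weak = ppv(power=0.20, alpha=0.05, prior_odds=0.10)

print(f"Well-designed study of a likely effect: {strong:.0%} chance of being right")
print(f"Underpowered long-shot study:           {weak:.0%} chance of being right")
```

Even this simplified version shows the pattern the article describes: the weaker the study and the less plausible the hypothesis, the more likely a "significant" result is to be wrong.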


Another article in The Economist discusses the future of scientific publishing: “All this could change the traditional form of the peer-review process, at least for the publication of papers. The process is organized by the publisher but conducted, for free, by scholars. The advantages afforded by the internet mean that primary data is becoming available freely online. Indeed, quite often the online paper has a direct link to it. This means that reported findings are more readily replicable and checkable by other teams of researchers. Moreover, online publication offers the opportunity for others to comment on the research. Research is also becoming more collaborative so that, before they have been finalized, papers have been reviewed by several authors.”6


Finally, it must be kept in mind that readers often do not fully examine the details of publications. The Devil is in the details when it comes to analyzing research, as noted by Richard Lewontin in relating the story of the wonder-rabbi of Chelm, who had a vision of the fiery destruction of a school in the city of Lublin, fifty miles away. Some time later, when sympathy was offered to a visitor from that city, he said there had been no such event and, on hearing the source, asked, “What kind of wonder-rabbi is that?” One of the rabbi’s disciples replied, “Well, burned or not burned, it’s only a detail. The wonder is he could see so far.”7




  1. Sanford, M.T. Apis Newsletter <>, accessed November 19, 2005.
  2. Sanford, M.T. Apis Newsletter <>, accessed November 19, 2005.
  3. Sanford, M.T. Apis Newsletter at <>, accessed November 19, 2005.
  4. BEE-L Digest - 14 Oct 2005 to 15 Oct 2005 (#2005-275)
  5. The Economist, September 3, 2005, p. 72.
  6. The Economist, September 24, 2005, p. 97.
  7. Lewontin, R. 2000. It Ain’t Necessarily So: The Dream of the Human Genome and Other Illusions.